Test Report: Docker_Linux_docker_arm64 21701

39a663ec30ddfd049b0783b78fdfbb9970ee2a8a:2025-10-06:41791

Failed tests (7/347)

Order  Failed test                                     Duration (s)
29     TestAddons/serial/Volcano                       524.12
37     TestAddons/parallel/Ingress                     492.56
41     TestAddons/parallel/CSI                         391.43
44     TestAddons/parallel/LocalPath                   345.52
91     TestFunctional/parallel/DashboardCmd            302.33
98     TestFunctional/parallel/ServiceCmdConnect       603.30
100    TestFunctional/parallel/PersistentVolumeClaim   249.24
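Any of these can be re-run on its own. A minimal sketch, assuming a checked-out minikube repo and a prebuilt out/minikube-linux-arm64 binary (the CI job's exact test flags are not shown in this report):

	# Re-run a single failed integration test by its full subtest path.
	go test ./test/integration -run 'TestAddons/serial/Volcano' -timeout 90m -v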
TestAddons/serial/Volcano (524.12s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 92.608066ms
addons_test.go:876: volcano-admission stabilized in 92.723256ms
addons_test.go:868: volcano-scheduler stabilized in 92.826245ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-wqkfr" [ec2b7536-52d2-4e9f-8b80-d23d6f5bc7de] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:352: "volcano-scheduler-76c996c8bf-wqkfr" [ec2b7536-52d2-4e9f-8b80-d23d6f5bc7de] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5m17.005016297s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-2szwf" [737ac497-9742-4adb-9277-f15da9c02148] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.00351912s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-l5t58" [4d5ece43-3691-4dfd-9154-b05a0ba6f569] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0035884s
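The three waits above poll pods by label until all containers report Ready. A rough standalone equivalent with kubectl (a sketch, not the test helper's actual implementation):

	kubectl --context addons-006450 -n volcano-system \
	  wait --for=condition=Ready pod -l app=volcano-scheduler --timeout=6m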
addons_test.go:903: (dbg) Run:  kubectl --context addons-006450 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-006450 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-006450 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [b9b10c39-50d5-4b81-bc84-afbdbd30c824] Pending
helpers_test.go:352: "test-job-nginx-0" [b9b10c39-50d5-4b81-bc84-afbdbd30c824] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:935: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
addons_test.go:935: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-10-06 14:31:57.059973657 +0000 UTC m=+695.857842428
addons_test.go:935: (dbg) Run:  kubectl --context addons-006450 describe po test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) kubectl --context addons-006450 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             addons-006450/192.168.49.2
Start Time:       Mon, 06 Oct 2025 14:28:57 +0000
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-1f8792fc-57b1-4293-acee-731d9de07970
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               10.244.0.27
IPs:
IP:           10.244.0.27
Controlled By:  Job/test-job
Containers:
nginx:
Container ID:  
Image:         nginx:latest
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
10m
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-645g9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-645g9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From     Message
----     ------     ----                   ----     -------
Normal   Scheduled  3m                     volcano  Successfully assigned my-volcano/test-job-nginx-0 to addons-006450
Warning  Failed     2m20s (x3 over 2m59s)  kubelet  Failed to pull image "nginx:latest": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    20s (x10 over 2m59s)   kubelet  Back-off pulling image "nginx:latest"
Warning  Failed     20s (x10 over 2m59s)   kubelet  Error: ImagePullBackOff
Normal   Pulling    5s (x5 over 2m59s)     kubelet  Pulling image "nginx:latest"
Warning  Failed     4s (x5 over 2m59s)     kubelet  Error: ErrImagePull
Warning  Failed     4s (x2 over 98s)       kubelet  Failed to pull image "nginx:latest": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
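The same events can be pulled directly for the failing pod when the full describe output is too noisy; a sketch using a field selector on the pod name:

	kubectl --context addons-006450 -n my-volcano get events \
	  --field-selector involvedObject.name=test-job-nginx-0 --sort-by=.lastTimestamp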
addons_test.go:935: (dbg) Run:  kubectl --context addons-006450 logs test-job-nginx-0 -n my-volcano
addons_test.go:935: (dbg) Non-zero exit: kubectl --context addons-006450 logs test-job-nginx-0 -n my-volcano: exit status 1 (107.354305ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "test-job-nginx-0" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:935: kubectl --context addons-006450 logs test-job-nginx-0 -n my-volcano: exit status 1
addons_test.go:936: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
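The failure is Docker Hub's anonymous pull quota (toomanyrequests), not a cluster fault: every attempt to pull nginx:latest was throttled. The remaining quota on the runner can be checked via the token endpoint Docker documents for this purpose; a sketch, assuming curl and jq are available on the host:

	# Fetch an anonymous pull token for the ratelimitpreview/test image,
	# then read the RateLimit-* headers from a HEAD request on its manifest.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

Authenticated pulls, or starting the cluster with minikube's --registry-mirror flag, would sidestep the limit; neither mitigation was in effect for this run.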
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-006450
helpers_test.go:243: (dbg) docker inspect addons-006450:

-- stdout --
	[
	    {
	        "Id": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	        "Created": "2025-10-06T14:21:00.2900908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:21:00.391293391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hosts",
	        "LogPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90-json.log",
	        "Name": "/addons-006450",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006450:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006450",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	                "LowerDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-006450",
	                "Source": "/var/lib/docker/volumes/addons-006450/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006450",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006450",
	                "name.minikube.sigs.k8s.io": "addons-006450",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09ddbf4aed5db91393a32b35522feed3626a6a03e08f6e0448ebb5aad5998ddd",
	            "SandboxKey": "/var/run/docker/netns/09ddbf4aed5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006450": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f4:99:c4:a9:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "165f6e38041442732f4da1d95818020ddb3d0bf16ac6242c03ef818c1b73d7fb",
	                    "EndpointID": "b2523cc159053c0b4c03cccafdf39f8b82bb8b5c7e911427f39eed28857482fc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006450",
	                        "fedf355814c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
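Rather than parsing the full JSON above, the helpers later in this log read individual fields with docker's Go-template -f flag; the same pattern works standalone, e.g. to recover the mapped SSH port and the container IP shown in the inspect output:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-006450
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-006450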
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006450 -n addons-006450
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 logs -n 25: (1.408547669s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-379615 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-379615                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ -o=json --download-only -p download-only-023239 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-379615                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p download-docker-403886 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p download-docker-403886                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p binary-mirror-859483 --alsologtostderr --binary-mirror http://127.0.0.1:42473 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p binary-mirror-859483                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ addons  │ enable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ start   │ -p addons-006450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:20:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:20:33.934280  806109 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:20:33.934452  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934482  806109 out.go:374] Setting ErrFile to fd 2...
	I1006 14:20:33.934503  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934791  806109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:20:33.935342  806109 out.go:368] Setting JSON to false
	I1006 14:20:33.936278  806109 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":75786,"bootTime":1759684648,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:20:33.936380  806109 start.go:140] virtualization:  
	I1006 14:20:33.939820  806109 out.go:179] * [addons-006450] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:20:33.942845  806109 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:20:33.942925  806109 notify.go:220] Checking for updates...
	I1006 14:20:33.949235  806109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:20:33.952125  806109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:20:33.955049  806109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:20:33.957833  806109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:20:33.960596  806109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:20:33.963595  806109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:20:33.986303  806109 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:20:33.986439  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.050609  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.04143491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.050721  806109 docker.go:318] overlay module found
	I1006 14:20:34.053842  806109 out.go:179] * Using the docker driver based on user configuration
	I1006 14:20:34.056712  806109 start.go:304] selected driver: docker
	I1006 14:20:34.056733  806109 start.go:924] validating driver "docker" against <nil>
	I1006 14:20:34.056748  806109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:20:34.057477  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.111822  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.102783115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.111982  806109 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:20:34.112211  806109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:20:34.115275  806109 out.go:179] * Using Docker driver with root privileges
	I1006 14:20:34.118173  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:20:34.118253  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:20:34.118263  806109 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 14:20:34.118342  806109 start.go:348] cluster config:
	{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:20:34.121483  806109 out.go:179] * Starting "addons-006450" primary control-plane node in "addons-006450" cluster
	I1006 14:20:34.124347  806109 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:20:34.127249  806109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:20:34.130100  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:34.130168  806109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1006 14:20:34.130177  806109 cache.go:58] Caching tarball of preloaded images
	I1006 14:20:34.130222  806109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:20:34.130282  806109 preload.go:233] Found /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1006 14:20:34.130293  806109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1006 14:20:34.130624  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:20:34.130655  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json: {Name:mk78082a38967c23c9e0fec5499d829d2aa5600d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:20:34.149434  806109 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 14:20:34.149575  806109 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 14:20:34.149597  806109 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 14:20:34.149602  806109 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 14:20:34.149610  806109 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 14:20:34.149626  806109 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 14:20:52.383725  806109 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 14:20:52.383777  806109 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:20:52.383807  806109 start.go:360] acquireMachinesLock for addons-006450: {Name:mk6a488a7fef2004d8c41401b261288db1a55041 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:20:52.383940  806109 start.go:364] duration metric: took 111.276µs to acquireMachinesLock for "addons-006450"
	I1006 14:20:52.383972  806109 start.go:93] Provisioning new machine with config: &{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:20:52.384058  806109 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:20:52.387398  806109 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 14:20:52.387686  806109 start.go:159] libmachine.API.Create for "addons-006450" (driver="docker")
	I1006 14:20:52.387754  806109 client.go:168] LocalClient.Create starting
	I1006 14:20:52.387880  806109 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem
	I1006 14:20:52.755986  806109 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem
	I1006 14:20:54.000215  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:20:54.021843  806109 cli_runner.go:211] docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:20:54.021935  806109 network_create.go:284] running [docker network inspect addons-006450] to gather additional debugging logs...
	I1006 14:20:54.021951  806109 cli_runner.go:164] Run: docker network inspect addons-006450
	W1006 14:20:54.038245  806109 cli_runner.go:211] docker network inspect addons-006450 returned with exit code 1
	I1006 14:20:54.038287  806109 network_create.go:287] error running [docker network inspect addons-006450]: docker network inspect addons-006450: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006450 not found
	I1006 14:20:54.038299  806109 network_create.go:289] output of [docker network inspect addons-006450]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006450 not found
	
	** /stderr **
	I1006 14:20:54.038438  806109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:20:54.055471  806109 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d4c380}
	I1006 14:20:54.055517  806109 network_create.go:124] attempt to create docker network addons-006450 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:20:54.055572  806109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006450 addons-006450
	I1006 14:20:54.110341  806109 network_create.go:108] docker network addons-006450 192.168.49.0/24 created
	I1006 14:20:54.110371  806109 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006450" container
	I1006 14:20:54.110459  806109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:20:54.127884  806109 cli_runner.go:164] Run: docker volume create addons-006450 --label name.minikube.sigs.k8s.io=addons-006450 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:20:54.148808  806109 oci.go:103] Successfully created a docker volume addons-006450
	I1006 14:20:54.148892  806109 cli_runner.go:164] Run: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:20:56.324467  806109 cli_runner.go:217] Completed: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.175532295s)
	I1006 14:20:56.324511  806109 oci.go:107] Successfully prepared a docker volume addons-006450
	I1006 14:20:56.324545  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:56.324566  806109 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:20:56.324627  806109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:21:00.168028  806109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.843356071s)
	I1006 14:21:00.168062  806109 kic.go:203] duration metric: took 3.843492791s to extract preloaded images to volume ...
	W1006 14:21:00.168228  806109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 14:21:00.168353  806109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:21:00.269120  806109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006450 --name addons-006450 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006450 --network addons-006450 --ip 192.168.49.2 --volume addons-006450:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:21:00.667135  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Running}}
	I1006 14:21:00.686913  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:00.708915  806109 cli_runner.go:164] Run: docker exec addons-006450 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:21:00.766467  806109 oci.go:144] the created container "addons-006450" has a running status.
	I1006 14:21:00.766496  806109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa...
	I1006 14:21:01.209222  806109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:21:01.244403  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.278442  806109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:21:01.278462  806109 kic_runner.go:114] Args: [docker exec --privileged addons-006450 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:21:01.342721  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.366223  806109 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:01.366312  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.386115  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.388381  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.388404  806109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:01.583723  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.583748  806109 ubuntu.go:182] provisioning hostname "addons-006450"
	I1006 14:21:01.583829  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.604321  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.604631  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.604648  806109 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006450 && echo "addons-006450" | sudo tee /etc/hostname
	I1006 14:21:01.762558  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.762702  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.783081  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.783379  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.783396  806109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:01.932033  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:21:01.932056  806109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-803497/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-803497/.minikube}
	I1006 14:21:01.932087  806109 ubuntu.go:190] setting up certificates
	I1006 14:21:01.932101  806109 provision.go:84] configureAuth start
	I1006 14:21:01.932162  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:01.953264  806109 provision.go:143] copyHostCerts
	I1006 14:21:01.953391  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem (1082 bytes)
	I1006 14:21:01.953509  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem (1123 bytes)
	I1006 14:21:01.953572  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem (1675 bytes)
	I1006 14:21:01.953642  806109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem org=jenkins.addons-006450 san=[127.0.0.1 192.168.49.2 addons-006450 localhost minikube]
	I1006 14:21:02.364998  806109 provision.go:177] copyRemoteCerts
	I1006 14:21:02.365098  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:02.365155  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.381521  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:02.475833  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:02.494054  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:02.512540  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 14:21:02.530771  806109 provision.go:87] duration metric: took 598.646522ms to configureAuth
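
configureAuth above generated a Docker server certificate with the SAN set shown at provision.go:117 ([127.0.0.1 192.168.49.2 addons-006450 localhost minikube]). With a reasonably recent OpenSSL (1.1.1+ for the -ext flag) the result can be inspected directly; a minimal sketch using the path from this log:

	openssl x509 -noout -subject -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem
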
	I1006 14:21:02.530795  806109 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:02.531031  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:02.531089  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.548485  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.548797  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.548814  806109 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 14:21:02.680553  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1006 14:21:02.680572  806109 ubuntu.go:71] root file system type: overlay
	I1006 14:21:02.680735  806109 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 14:21:02.680812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.697880  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.698189  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.698287  806109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 14:21:02.846019  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1006 14:21:02.846167  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.863632  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.864002  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.864029  806109 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 14:21:03.799164  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-06 14:21:02.840466123 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
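
The swap command above is deliberately idempotent: diff -u exits non-zero only when the rendered unit differs from the installed one, so docker.service.new is moved into place and the daemon reloaded and restarted only on change (as happened here, hence the diff output). Afterwards the effective unit can be confirmed from inside the node; a minimal sketch:

	sudo systemctl cat docker.service                 # unit file as loaded by systemd
	sudo systemctl show docker.service -p ExecStart   # the single resolved ExecStart=
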
	
	I1006 14:21:03.799202  806109 machine.go:96] duration metric: took 2.432959766s to provisionDockerMachine
	I1006 14:21:03.799214  806109 client.go:171] duration metric: took 11.411453149s to LocalClient.Create
	I1006 14:21:03.799235  806109 start.go:167] duration metric: took 11.41157629s to libmachine.API.Create "addons-006450"
	I1006 14:21:03.799246  806109 start.go:293] postStartSetup for "addons-006450" (driver="docker")
	I1006 14:21:03.799257  806109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:03.799333  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:03.799381  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.817018  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:03.911433  806109 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:03.914606  806109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:03.914683  806109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:03.914699  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/addons for local assets ...
	I1006 14:21:03.914767  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/files for local assets ...
	I1006 14:21:03.914795  806109 start.go:296] duration metric: took 115.542737ms for postStartSetup
	I1006 14:21:03.915135  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:03.931532  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:21:03.931854  806109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:03.931910  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.948768  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.041025  806109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:04.046229  806109 start.go:128] duration metric: took 11.662156071s to createHost
	I1006 14:21:04.046252  806109 start.go:83] releasing machines lock for "addons-006450", held for 11.662297525s
	I1006 14:21:04.046327  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:04.063754  806109 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:04.063815  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.063893  806109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:04.063975  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.082777  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.099024  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.268948  806109 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:04.275561  806109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:21:04.279819  806109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:04.279895  806109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:04.306291  806109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 14:21:04.306318  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.306351  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.306446  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.320125  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1006 14:21:04.329116  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 14:21:04.338037  806109 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.338156  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 14:21:04.347404  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.357144  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 14:21:04.366129  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.374845  806109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:04.382821  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 14:21:04.391940  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1006 14:21:04.400832  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1006 14:21:04.409604  806109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:04.417019  806109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
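
The two sysctl steps above satisfy kubeadm preflight expectations: bridged traffic must be visible to iptables, and IPv4 forwarding must be on (the FileContent--proc-sys-net-bridge-bridge-nf-call-iptables entry in the kubeadm init ignore list further down refers to the former). Equivalent commands, as a sketch:

	sysctl net.bridge.bridge-nf-call-iptables   # should report 1
	sudo sysctl -w net.ipv4.ip_forward=1
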
	I1006 14:21:04.424313  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:04.532131  806109 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1006 14:21:04.625905  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.625977  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.626053  806109 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 14:21:04.640910  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.654413  806109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:04.685901  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.698603  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 14:21:04.711790  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.725497  806109 ssh_runner.go:195] Run: which cri-dockerd
	I1006 14:21:04.729345  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 14:21:04.737737  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1006 14:21:04.751393  806109 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 14:21:04.873692  806109 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 14:21:04.984971  806109 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.985108  806109 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
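
The 130-byte daemon.json staged here carries the cgroup-driver choice detected above; its exact contents are not shown in this log, so the following is only an illustrative sketch of a daemon.json that selects cgroupfs:

	sudo tee /etc/docker/daemon.json <<'EOF'
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	EOF
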
	I1006 14:21:05.002843  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1006 14:21:05.020602  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.142830  806109 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 14:21:05.525909  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:05.538352  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1006 14:21:05.551902  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:05.567756  806109 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 14:21:05.691941  806109 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 14:21:05.814431  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.934017  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 14:21:05.949991  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1006 14:21:05.962662  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.092789  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1006 14:21:06.164834  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:06.178359  806109 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 14:21:06.178520  806109 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 14:21:06.182231  806109 start.go:563] Will wait 60s for crictl version
	I1006 14:21:06.182343  806109 ssh_runner.go:195] Run: which crictl
	I1006 14:21:06.185820  806109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:06.209958  806109 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1006 14:21:06.210077  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.232534  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.261297  806109 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1006 14:21:06.261408  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:06.277505  806109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:06.281321  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.291363  806109 kubeadm.go:883] updating cluster {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:06.291470  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:21:06.291533  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.310531  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.310560  806109 docker.go:621] Images already preloaded, skipping extraction
	I1006 14:21:06.310627  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.329469  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.329494  806109 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:21:06.329511  806109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1006 14:21:06.329612  806109 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-006450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:21:06.329683  806109 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 14:21:06.383455  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:06.383492  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:06.383512  806109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:06.383538  806109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006450 NodeName:addons-006450 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:06.383695  806109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-006450"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
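
This rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new below and copied into place before init runs. Such a file can also be sanity-checked by hand; a sketch, assuming the kubeadm binary staged by minikube in this log:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml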
	
	I1006 14:21:06.383769  806109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:06.391605  806109 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:06.391780  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:06.399572  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1006 14:21:06.412296  806109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:06.425462  806109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1006 14:21:06.438424  806109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:06.442129  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.452170  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.565870  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:06.583339  806109 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450 for IP: 192.168.49.2
	I1006 14:21:06.583363  806109 certs.go:195] generating shared ca certs ...
	I1006 14:21:06.583383  806109 certs.go:227] acquiring lock for ca certs: {Name:mk78547ccc35462965e66385811a001935f7f131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.583518  806109 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key
	I1006 14:21:06.758169  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt ...
	I1006 14:21:06.758199  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt: {Name:mke50bad3f8d3d8c6fc7003f3930a8a3fa326b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758398  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key ...
	I1006 14:21:06.758412  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key: {Name:mk5abe63bfac59b481f1b34a2e6312b79c376290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758508  806109 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key
	I1006 14:21:07.226648  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt ...
	I1006 14:21:07.226681  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt: {Name:mk35f86863953865131b747e65133218cef7ac69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.226896  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key ...
	I1006 14:21:07.226910  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key: {Name:mk32f77223b3be8cca86a275e013030fd8c48071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.227011  806109 certs.go:257] generating profile certs ...
	I1006 14:21:07.227078  806109 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key
	I1006 14:21:07.227095  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt with IP's: []
	I1006 14:21:08.232319  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt ...
	I1006 14:21:08.232348  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: {Name:mk237396132558310e9472dccd1a03e68855c562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232531  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key ...
	I1006 14:21:08.232540  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key: {Name:mkddc2eaac1b60c97f1b0888b122f0d14ff81585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232614  806109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa
	I1006 14:21:08.232629  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:21:08.361861  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa ...
	I1006 14:21:08.361891  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa: {Name:mk44f5f6071204e4219adaa4cbde67bf1f671150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362071  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa ...
	I1006 14:21:08.362085  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa: {Name:mkaddbc6367afe0cdf204382e298fb821349ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362173  806109 certs.go:382] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt
	I1006 14:21:08.362251  806109 certs.go:386] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key
	I1006 14:21:08.362308  806109 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key
	I1006 14:21:08.362337  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt with IP's: []
	I1006 14:21:09.174420  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt ...
	I1006 14:21:09.174451  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt: {Name:mk6a018d5a25b41127abffe602062c5fb3c9da1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174648  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key ...
	I1006 14:21:09.174662  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key: {Name:mk882903eb03fda7b8a7b7a45601eaab350263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174869  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:09.174912  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:09.174936  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:09.174963  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem (1675 bytes)
	I1006 14:21:09.175647  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:09.195248  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:21:09.214696  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:09.234148  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 14:21:09.252534  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:21:09.270877  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:09.289342  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:09.307151  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:09.325295  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:09.343473  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:09.356830  806109 ssh_runner.go:195] Run: openssl version
	I1006 14:21:09.363194  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:09.371688  806109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375519  806109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375603  806109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.421333  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
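
The link name b5213941.0 is the OpenSSL subject hash of minikubeCA.pem (the value printed by the x509 -hash call just above) plus the conventional .0 suffix, which is how OpenSSL-based clients locate trusted CAs under /etc/ssl/certs. To reproduce inside the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0
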
	I1006 14:21:09.430436  806109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:09.434631  806109 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:21:09.434680  806109 kubeadm.go:400] StartCluster: {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:09.434811  806109 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:21:09.456777  806109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:09.465021  806109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:21:09.473033  806109 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:21:09.473109  806109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:21:09.480866  806109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:21:09.480886  806109 kubeadm.go:157] found existing configuration files:
	
	I1006 14:21:09.480957  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:21:09.488809  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:21:09.488875  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:21:09.496674  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:21:09.504791  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:21:09.504865  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:21:09.512822  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.520596  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:21:09.520672  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.528333  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:21:09.536500  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:21:09.536573  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:21:09.544325  806109 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:21:09.582751  806109 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:21:09.582817  806109 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:21:09.609398  806109 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:21:09.609476  806109 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 14:21:09.609518  806109 kubeadm.go:318] OS: Linux
	I1006 14:21:09.609570  806109 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:21:09.609625  806109 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 14:21:09.609679  806109 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:21:09.609733  806109 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:21:09.609792  806109 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:21:09.609847  806109 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:21:09.609902  806109 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:21:09.609955  806109 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:21:09.610011  806109 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 14:21:09.690823  806109 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:21:09.690944  806109 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:21:09.691059  806109 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:21:09.716052  806109 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:21:09.722414  806109 out.go:252]   - Generating certificates and keys ...
	I1006 14:21:09.722525  806109 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:21:09.722604  806109 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:21:10.515752  806109 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:21:11.397580  806109 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:21:12.455188  806109 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:21:12.900218  806109 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:21:13.333042  806109 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:21:13.333192  806109 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:13.558599  806109 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:21:13.558992  806109 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:14.483025  806109 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:21:15.088755  806109 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:21:15.636700  806109 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:21:15.637033  806109 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:21:16.739302  806109 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:21:17.694897  806109 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:21:18.343756  806109 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:21:18.712603  806109 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:21:19.266809  806109 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:21:19.267485  806109 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:21:19.270758  806109 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:21:19.274504  806109 out.go:252]   - Booting up control plane ...
	I1006 14:21:19.274628  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:21:19.274721  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:21:19.275790  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:21:19.292829  806109 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:21:19.293280  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:21:19.301074  806109 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:21:19.301395  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:21:19.301643  806109 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:21:19.440373  806109 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:21:19.440504  806109 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:21:20.940044  806109 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501293606s
	I1006 14:21:20.940318  806109 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:21:20.940416  806109 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:21:20.940516  806109 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:21:20.940602  806109 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:21:24.828532  806109 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.887425512s
	I1006 14:21:27.037731  806109 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.097440124s
	I1006 14:21:27.942161  806109 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001481359s
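Note: the three control-plane checks above hit each component's built-in health endpoint over HTTPS. The same probes can be run by hand against the endpoints printed in the log (-k because the serving certificates are cluster-signed, not publicly trusted):

	curl -k https://192.168.49.2:8443/livez     # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager (loopback only)
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler (loopback only)

The kubelet check a few lines earlier is the same idea over plain HTTP on 127.0.0.1:10248/healthz.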
	I1006 14:21:27.961418  806109 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 14:21:27.977744  806109 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 14:21:27.992347  806109 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 14:21:27.992563  806109 kubeadm.go:318] [mark-control-plane] Marking the node addons-006450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 14:21:28.013758  806109 kubeadm.go:318] [bootstrap-token] Using token: e1p0fh.afy23ij81unzzcb1
	I1006 14:21:28.016851  806109 out.go:252]   - Configuring RBAC rules ...
	I1006 14:21:28.016992  806109 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 14:21:28.022251  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 14:21:28.031560  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 14:21:28.036500  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 14:21:28.041064  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 14:21:28.048112  806109 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 14:21:28.349107  806109 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 14:21:28.790402  806109 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 14:21:29.351014  806109 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 14:21:29.352283  806109 kubeadm.go:318] 
	I1006 14:21:29.352364  806109 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 14:21:29.352375  806109 kubeadm.go:318] 
	I1006 14:21:29.352461  806109 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 14:21:29.352472  806109 kubeadm.go:318] 
	I1006 14:21:29.352498  806109 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 14:21:29.352567  806109 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 14:21:29.352625  806109 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 14:21:29.352634  806109 kubeadm.go:318] 
	I1006 14:21:29.352691  806109 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 14:21:29.352700  806109 kubeadm.go:318] 
	I1006 14:21:29.352750  806109 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 14:21:29.352759  806109 kubeadm.go:318] 
	I1006 14:21:29.352815  806109 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 14:21:29.352899  806109 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 14:21:29.352974  806109 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 14:21:29.352983  806109 kubeadm.go:318] 
	I1006 14:21:29.353071  806109 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 14:21:29.353153  806109 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 14:21:29.353161  806109 kubeadm.go:318] 
	I1006 14:21:29.353249  806109 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353360  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 \
	I1006 14:21:29.353397  806109 kubeadm.go:318] 	--control-plane 
	I1006 14:21:29.353406  806109 kubeadm.go:318] 
	I1006 14:21:29.353495  806109 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 14:21:29.353503  806109 kubeadm.go:318] 
	I1006 14:21:29.353588  806109 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353698  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 
	I1006 14:21:29.356907  806109 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 14:21:29.357135  806109 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 14:21:29.357260  806109 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
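Note: the printed kubeadm join lines authenticate the cluster to a joining node via --discovery-token-ca-cert-hash, a SHA-256 over the cluster CA's public key. If only the token is saved, the hash can be recomputed on the control plane with the standard kubeadm recipe (default PKI path assumed):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

Prefix the output with sha256: to match the format kubeadm join expects.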
	I1006 14:21:29.357283  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:29.357298  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:29.360240  806109 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:21:29.363197  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:21:29.371108  806109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
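Note: the 496-byte conflist written above backs the bridge CNI that the cni.go lines just before it recommended. The exact file minikube ships is not shown in the log; a minimal bridge + portmap conflist of the usual shape looks like this (illustrative sketch only — subnet, names, and flags are assumptions, not the literal 496-byte payload):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF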
	I1006 14:21:29.386109  806109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:21:29.386176  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:29.386250  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006450 minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-006450 minikube.k8s.io/primary=true
	I1006 14:21:29.530062  806109 ops.go:34] apiserver oom_adj: -16
	I1006 14:21:29.530192  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.031190  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.530267  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.030839  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.530611  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.030258  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.530722  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.030864  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.530331  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.030732  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.138751  806109 kubeadm.go:1113] duration metric: took 4.752637843s to wait for elevateKubeSystemPrivileges
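Note: the burst of `kubectl get sa default` runs above is a readiness poll, not a loop gone wrong: the default ServiceAccount is created asynchronously by the controller manager, and the minikube-rbac ClusterRoleBinding created at 14:21:29.386 only has an effect once it exists. Expressed as a shell loop (same binary and kubeconfig as in the log; the ~500ms cadence is inferred from the timestamps):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done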
	I1006 14:21:34.138779  806109 kubeadm.go:402] duration metric: took 24.704102384s to StartCluster
	I1006 14:21:34.138798  806109 settings.go:142] acquiring lock: {Name:mk86d6d1803b10e0f74b7ca9be175f37419eb162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.138932  806109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:21:34.139342  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/kubeconfig: {Name:mkd0e7dce0fefee9d8326b7f5e1280f715df58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.139547  806109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:21:34.139652  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 14:21:34.139913  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.139945  806109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1006 14:21:34.140026  806109 addons.go:69] Setting yakd=true in profile "addons-006450"
	I1006 14:21:34.140047  806109 addons.go:238] Setting addon yakd=true in "addons-006450"
	I1006 14:21:34.140069  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.140558  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.140784  806109 addons.go:69] Setting inspektor-gadget=true in profile "addons-006450"
	I1006 14:21:34.140802  806109 addons.go:238] Setting addon inspektor-gadget=true in "addons-006450"
	I1006 14:21:34.140825  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.141217  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.141581  806109 addons.go:69] Setting metrics-server=true in profile "addons-006450"
	I1006 14:21:34.141646  806109 addons.go:238] Setting addon metrics-server=true in "addons-006450"
	I1006 14:21:34.141685  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.142139  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.143205  806109 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.143238  806109 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-006450"
	I1006 14:21:34.143270  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.143806  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.144933  806109 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.144962  806109 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-006450"
	I1006 14:21:34.144997  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.145499  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.146720  806109 addons.go:69] Setting cloud-spanner=true in profile "addons-006450"
	I1006 14:21:34.146748  806109 addons.go:238] Setting addon cloud-spanner=true in "addons-006450"
	I1006 14:21:34.146777  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.147335  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.156945  806109 addons.go:69] Setting registry=true in profile "addons-006450"
	I1006 14:21:34.157043  806109 addons.go:238] Setting addon registry=true in "addons-006450"
	I1006 14:21:34.157131  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.157718  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.176071  806109 addons.go:69] Setting registry-creds=true in profile "addons-006450"
	I1006 14:21:34.176145  806109 addons.go:238] Setting addon registry-creds=true in "addons-006450"
	I1006 14:21:34.176197  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.176774  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.185281  806109 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006450"
	I1006 14:21:34.185740  806109 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:34.185846  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.187060  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.193152  806109 addons.go:69] Setting storage-provisioner=true in profile "addons-006450"
	I1006 14:21:34.193188  806109 addons.go:238] Setting addon storage-provisioner=true in "addons-006450"
	I1006 14:21:34.193224  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.193707  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.207765  806109 addons.go:69] Setting default-storageclass=true in profile "addons-006450"
	I1006 14:21:34.207813  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006450"
	I1006 14:21:34.208233  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.208517  806109 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006450"
	I1006 14:21:34.208563  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006450"
	I1006 14:21:34.208903  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.218653  806109 addons.go:69] Setting volcano=true in profile "addons-006450"
	I1006 14:21:34.219019  806109 addons.go:238] Setting addon volcano=true in "addons-006450"
	I1006 14:21:34.219129  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.219730  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.219851  806109 addons.go:69] Setting gcp-auth=true in profile "addons-006450"
	I1006 14:21:34.219900  806109 mustload.go:65] Loading cluster: addons-006450
	I1006 14:21:34.220156  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.220463  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.244567  806109 addons.go:69] Setting volumesnapshots=true in profile "addons-006450"
	I1006 14:21:34.244607  806109 addons.go:238] Setting addon volumesnapshots=true in "addons-006450"
	I1006 14:21:34.244648  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.245166  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.256667  806109 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:34.256935  806109 addons.go:69] Setting ingress=true in profile "addons-006450"
	I1006 14:21:34.256960  806109 addons.go:238] Setting addon ingress=true in "addons-006450"
	I1006 14:21:34.257001  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.257557  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.285413  806109 addons.go:69] Setting ingress-dns=true in profile "addons-006450"
	I1006 14:21:34.285459  806109 addons.go:238] Setting addon ingress-dns=true in "addons-006450"
	I1006 14:21:34.285510  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.286061  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.332782  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 14:21:34.338069  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 14:21:34.338156  806109 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 14:21:34.338257  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.357721  806109 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 14:21:34.362166  806109 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:34.362235  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 14:21:34.362331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.380568  806109 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 14:21:34.383806  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 14:21:34.383934  806109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 14:21:34.384103  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.384670  806109 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 14:21:34.393975  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 14:21:34.394079  806109 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 14:21:34.394248  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.420035  806109 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 14:21:34.423442  806109 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:34.423541  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 14:21:34.423642  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.431543  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:34.457975  806109 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 14:21:34.497876  806109 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 14:21:34.498037  806109 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 14:21:34.510678  806109 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 14:21:34.519256  806109 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:34.519362  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 14:21:34.519521  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.526420  806109 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:34.526447  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 14:21:34.526546  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.528693  806109 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 14:21:34.528724  806109 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 14:21:34.528812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.532917  806109 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 14:21:34.536266  806109 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 14:21:34.537209  806109 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 14:21:34.537230  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 14:21:34.537331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.542063  806109 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-006450"
	I1006 14:21:34.542107  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.542545  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.581749  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 14:21:34.585130  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.588025  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.590892  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:34.590917  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 14:21:34.591008  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.605945  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:34.605973  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 14:21:34.606041  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.626809  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.628682  806109 addons.go:238] Setting addon default-storageclass=true in "addons-006450"
	I1006 14:21:34.628721  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.629125  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.636774  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 14:21:34.640152  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.649003  806109 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1006 14:21:34.649626  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.656019  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 14:21:34.658838  806109 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1006 14:21:34.664662  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.676340  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 14:21:34.676611  806109 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1006 14:21:34.703838  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.723458  806109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:34.726631  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:34.726657  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:34.726743  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.752688  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 14:21:34.756756  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 14:21:34.760053  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 14:21:34.763938  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 14:21:34.769389  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 14:21:34.772287  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 14:21:34.772317  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 14:21:34.772394  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.772747  806109 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:34.772787  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1006 14:21:34.772862  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.804304  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.808420  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.822462  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.823147  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.867044  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.870362  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.874341  806109 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 14:21:34.876981  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.878063  806109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:34.878079  806109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:34.878140  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.888089  806109 out.go:179]   - Using image docker.io/busybox:stable
	I1006 14:21:34.891239  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:34.891265  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 14:21:34.891331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.920306  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.945324  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 14:21:34.947994  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:34.970150  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.970251  806109 retry.go:31] will retry after 147.40402ms: ssh: handshake failed: EOF
	W1006 14:21:34.972537  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.972566  806109 retry.go:31] will retry after 281.687683ms: ssh: handshake failed: EOF
	I1006 14:21:34.975793  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.005444  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:35.009771  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.009812  806109 retry.go:31] will retry after 207.774831ms: ssh: handshake failed: EOF
	I1006 14:21:35.012483  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.127149  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1006 14:21:35.219409  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.219491  806109 retry.go:31] will retry after 414.252414ms: ssh: handshake failed: EOF
	W1006 14:21:35.255517  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.255595  806109 retry.go:31] will retry after 378.429324ms: ssh: handshake failed: EOF
	I1006 14:21:35.851743  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:35.853206  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:35.989160  806109 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:35.989181  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 14:21:36.111352  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:36.151070  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 14:21:36.151165  806109 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 14:21:36.192781  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 14:21:36.192855  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 14:21:36.226627  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 14:21:36.226690  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 14:21:36.243375  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:36.255630  806109 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 14:21:36.255746  806109 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 14:21:36.350477  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 14:21:36.350562  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 14:21:36.377661  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:36.396057  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:36.399305  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:36.426714  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 14:21:36.426796  806109 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 14:21:36.427640  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:36.435627  806109 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.435647  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 14:21:36.443471  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:36.479083  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:36.481831  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 14:21:36.481904  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 14:21:36.527849  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 14:21:36.527927  806109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 14:21:36.537515  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 14:21:36.537591  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 14:21:36.597935  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 14:21:36.598000  806109 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 14:21:36.601149  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.790553  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:36.790647  806109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 14:21:36.821053  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 14:21:36.821135  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 14:21:36.867220  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:36.871426  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 14:21:36.871504  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 14:21:36.880338  806109 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.753102328s)
	I1006 14:21:36.880515  806109 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.935150087s)
	I1006 14:21:36.880679  806109 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
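Note: the long sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the two sed expressions in the command, the Corefile gains a `log` directive before `errors` and this hosts block ahead of the `forward . /etc/resolv.conf` line, which is what makes host.minikube.internal resolve to the host-side gateway:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }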
	I1006 14:21:36.881380  806109 node_ready.go:35] waiting up to 6m0s for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887470  806109 node_ready.go:49] node "addons-006450" is "Ready"
	I1006 14:21:36.887509  806109 node_ready.go:38] duration metric: took 6.110221ms for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887526  806109 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:21:36.887614  806109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:21:36.891551  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:37.041224  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.041263  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 14:21:37.185540  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 14:21:37.185582  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 14:21:37.245756  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 14:21:37.245794  806109 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 14:21:37.320678  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.384934  806109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006450" context rescaled to 1 replicas
	I1006 14:21:37.439254  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 14:21:37.439280  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 14:21:37.491833  806109 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:37.491853  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 14:21:37.710140  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.858315722s)
	I1006 14:21:37.710258  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.856978431s)
	I1006 14:21:37.797019  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 14:21:37.797087  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 14:21:38.055462  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.944020191s)
	I1006 14:21:38.066071  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:38.209415  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 14:21:38.209495  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 14:21:38.308015  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 14:21:38.308047  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 14:21:38.731766  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 14:21:38.731811  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 14:21:38.884673  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:38.884702  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 14:21:39.201324  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:42.056707  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 14:21:42.056850  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:42.096992  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:43.527695  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.284260443s)
	I1006 14:21:43.527736  806109 addons.go:479] Verifying addon ingress=true in "addons-006450"
	I1006 14:21:43.527908  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.150170305s)
	I1006 14:21:43.528008  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.131874449s)
	W1006 14:21:43.528029  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:43.528050  806109 retry.go:31] will retry after 227.873764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
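Note: the validation error above is consistent with the earlier transfer at 14:21:34.528, where ig-crd.yaml was copied over as only 14 bytes — far too small to carry a CustomResourceDefinition, so the decoded object has neither apiVersion nor kind. A quick on-node check (illustrative):

	wc -c /etc/kubernetes/addons/ig-crd.yaml   # expect 14 bytes, per the scp line
	cat /etc/kubernetes/addons/ig-crd.yaml     # look for the missing apiVersion:/kind: fields

The retry at 14:21:43.756 below re-applies the same pair of files with --force, which does not bypass validation, so the file content itself is what has to change.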
	I1006 14:21:43.528137  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.128758076s)
	I1006 14:21:43.528185  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.100376481s)
	I1006 14:21:43.528469  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.084972148s)
	I1006 14:21:43.528566  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.04940419s)
	I1006 14:21:43.528706  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.927477657s)
	I1006 14:21:43.528726  806109 addons.go:479] Verifying addon registry=true in "addons-006450"
	I1006 14:21:43.532546  806109 out.go:179] * Verifying ingress addon...
	I1006 14:21:43.534069  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 14:21:43.534935  806109 out.go:179] * Verifying registry addon...
	I1006 14:21:43.537759  806109 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 14:21:43.540886  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 14:21:43.565742  806109 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 14:21:43.565781  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:43.568676  806109 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 14:21:43.568708  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1006 14:21:43.576208  806109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
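Note: the default-storageclass failure above is a plain optimistic-concurrency conflict: the addon manager read the local-path StorageClass, another writer updated it first, and the write-back carried a stale resourceVersion. A patch avoids the race because it sends no resourceVersion; a hedged sketch of the same change done by hand (object name from the error message, annotation key from the standard default-class convention):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'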
	I1006 14:21:43.749034  806109 addons.go:238] Setting addon gcp-auth=true in "addons-006450"
	I1006 14:21:43.749121  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:43.749685  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:43.756132  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:43.787457  806109 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 14:21:43.787548  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:43.815805  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:44.114671  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:44.115253  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.548438  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.550543  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.046803  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.049237  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581293  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.153351  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:46.153798  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.640887  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.643861  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081245  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:47.568674  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.569175  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.056720  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.057131  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.585162  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.717857623s)
	I1006 14:21:48.585271  806109 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.697643759s)
	I1006 14:21:48.585318  806109 api_server.go:72] duration metric: took 14.445740723s to wait for apiserver process to appear ...
	I1006 14:21:48.585343  806109 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:21:48.585375  806109 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 14:21:48.585803  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.694205832s)
	I1006 14:21:48.585856  806109 addons.go:479] Verifying addon metrics-server=true in "addons-006450"
	I1006 14:21:48.585929  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.265223311s)
	I1006 14:21:48.586329  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.520142743s)
	W1006 14:21:48.586371  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 14:21:48.586391  806109 retry.go:31] will retry after 354.82385ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
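
[Editor's note] The failure above is the classic CRD-ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the API server has not yet established the new type, so REST mapping fails and kubectl exits 1; minikube's addons code simply retries. A minimal Go sketch of the usual fix, assuming the standard apiextensions client-go packages (this is illustrative, not minikube's actual code):

package snapshotwait

import (
	"context"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRDEstablished blocks until the named CRD (e.g.
// "volumesnapshotclasses.snapshot.storage.k8s.io") reports the Established
// condition, after which applying VolumeSnapshotClass objects can succeed.
func waitForCRDEstablished(ctx context.Context, c apiextensionsclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // CRD not visible yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

Applying the CRD manifests first, waiting like this, and only then applying the custom resources avoids the retry loop seen in this log.
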
	I1006 14:21:48.586570  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.385202699s)
	I1006 14:21:48.586585  806109 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:48.590422  806109 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006450 service yakd-dashboard -n yakd-dashboard
	
	I1006 14:21:48.592576  806109 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 14:21:48.597670  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 14:21:48.614206  806109 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 14:21:48.647358  806109 api_server.go:141] control plane version: v1.34.1
	I1006 14:21:48.647389  806109 api_server.go:131] duration metric: took 62.022744ms to wait for apiserver health ...
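
[Editor's note] The three lines above show the apiserver readiness gate: minikube polls https://192.168.49.2:8443/healthz until it returns HTTP 200 with body "ok". A minimal sketch of that pattern, assuming a self-signed cluster certificate (the real check trusts the cluster CA; this demo skips TLS verification for brevity and is not minikube's implementation):

package healthz

import (
	"context"
	"crypto/tls"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers HTTP 200 or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			resp, err := client.Get(url)
			if err != nil {
				continue // apiserver not reachable yet
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: "ok"
			}
		}
	}
}

// Example: waitForHealthz(ctx, "https://192.168.49.2:8443/healthz")
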
	I1006 14:21:48.647399  806109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:21:48.648507  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.648899  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.690542  806109 system_pods.go:59] 19 kube-system pods found
	I1006 14:21:48.690881  806109 system_pods.go:61] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.690920  806109 system_pods.go:61] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.690960  806109 system_pods.go:61] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.690990  806109 system_pods.go:61] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.691016  806109 system_pods.go:61] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.691053  806109 system_pods.go:61] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.691073  806109 system_pods.go:61] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.691092  806109 system_pods.go:61] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.691138  806109 system_pods.go:61] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.691163  806109 system_pods.go:61] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.691184  806109 system_pods.go:61] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.691218  806109 system_pods.go:61] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.691244  806109 system_pods.go:61] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.691266  806109 system_pods.go:61] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.691302  806109 system_pods.go:61] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.691330  806109 system_pods.go:61] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.691354  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691391  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691417  806109 system_pods.go:61] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.691437  806109 system_pods.go:74] duration metric: took 44.032107ms to wait for pod list to return data ...
	I1006 14:21:48.691473  806109 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:21:48.690844  806109 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 14:21:48.691711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
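
[Editor's note] The repeating kapi.go:96 "waiting for pod ... current state: Pending" lines are a label-selector poll: list pods matching the selector and re-check until every pod is Running. A rough client-go sketch of that loop (clientset construction omitted; this is an assumed reconstruction, not minikube's kapi package):

package kapisketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls until every pod matching selector in ns is Running.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep waiting
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // e.g. still Pending, as in the log above
				}
			}
			return true, nil
		})
}
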
	I1006 14:21:48.780129  806109 default_sa.go:45] found service account: "default"
	I1006 14:21:48.780207  806109 default_sa.go:55] duration metric: took 88.709889ms for default service account to be created ...
	I1006 14:21:48.780231  806109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:21:48.888790  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.132593822s)
	W1006 14:21:48.888876  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:48.888908  806109 retry.go:31] will retry after 467.080472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
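
[Editor's note] "apiVersion not set, kind not set" means kubectl parsed a document inside ig-crd.yaml whose top level lacks those required fields; a common cause is a malformed or partially rendered document between "---" separators, though the log does not show the file itself. A hedged pre-flight check one could write for this class of failure, splitting a multi-document manifest naively on separators (simplified; not part of minikube, and gopkg.in/yaml.v3 is an assumed dependency):

package manifestcheck

import (
	"bytes"
	"fmt"

	yaml "gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// checkManifest flags any non-empty YAML document missing apiVersion or kind,
// the condition kubectl rejects above.
func checkManifest(data []byte) error {
	for i, doc := range bytes.Split(data, []byte("\n---")) {
		if len(bytes.TrimSpace(doc)) == 0 {
			continue // empty documents are ignored, as kubectl does
		}
		var tm typeMeta
		if err := yaml.Unmarshal(doc, &tm); err != nil {
			return fmt.Errorf("document %d: %w", i, err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			return fmt.Errorf("document %d: apiVersion or kind not set", i)
		}
	}
	return nil
}
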
	I1006 14:21:48.888970  806109 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.101487907s)
	I1006 14:21:48.892596  806109 system_pods.go:86] 19 kube-system pods found
	I1006 14:21:48.892682  806109 system_pods.go:89] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.892707  806109 system_pods.go:89] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.892729  806109 system_pods.go:89] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.892769  806109 system_pods.go:89] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.892792  806109 system_pods.go:89] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.892812  806109 system_pods.go:89] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.892844  806109 system_pods.go:89] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.892868  806109 system_pods.go:89] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.892892  806109 system_pods.go:89] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.892925  806109 system_pods.go:89] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.892962  806109 system_pods.go:89] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.892984  806109 system_pods.go:89] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.893021  806109 system_pods.go:89] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.893045  806109 system_pods.go:89] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.893080  806109 system_pods.go:89] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.893105  806109 system_pods.go:89] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.893126  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893161  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893183  806109 system_pods.go:89] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.893204  806109 system_pods.go:126] duration metric: took 112.954104ms to wait for k8s-apps to be running ...
	I1006 14:21:48.893238  806109 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:21:48.893331  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:21:48.893436  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:48.897290  806109 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 14:21:48.900672  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 14:21:48.900752  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 14:21:48.942085  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:48.960118  806109 system_svc.go:56] duration metric: took 66.871905ms WaitForService to wait for kubelet
	I1006 14:21:48.960199  806109 kubeadm.go:586] duration metric: took 14.820620987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:21:48.960231  806109 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:21:48.965554  806109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:21:48.965640  806109 node_conditions.go:123] node cpu capacity is 2
	I1006 14:21:48.965667  806109 node_conditions.go:105] duration metric: took 5.41607ms to run NodePressure ...
	I1006 14:21:48.965693  806109 start.go:241] waiting for startup goroutines ...
	I1006 14:21:48.984429  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 14:21:48.984493  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 14:21:49.062891  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.063409  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.102274  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:49.109468  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.109495  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 14:21:49.163209  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.357126  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:49.543241  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.545480  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.602876  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.041860  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.044347  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.102201  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.541424  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.543788  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.625651  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.006456  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.064277984s)
	I1006 14:21:51.006543  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.84331281s)
	I1006 14:21:51.010142  806109 addons.go:479] Verifying addon gcp-auth=true in "addons-006450"
	I1006 14:21:51.025044  806109 out.go:179] * Verifying gcp-auth addon...
	I1006 14:21:51.032841  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 14:21:51.036529  806109 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 14:21:51.036555  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.042265  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.044526  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.102619  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.536647  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.544904  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.545440  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.602200  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.864284  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.507114739s)
	W1006 14:21:51.864377  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.864433  806109 retry.go:31] will retry after 615.286821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.037094  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.041054  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.043625  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.101572  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:52.479941  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:52.536478  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.541425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.543774  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.600990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.035872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.041098  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.043636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.101845  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.536239  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.536598  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.05658149s)
	W1006 14:21:53.536657  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.536695  806109 retry.go:31] will retry after 1.187113289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.541601  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.543552  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.602095  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.037487  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.042200  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.045343  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.102498  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.537542  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.542167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.544351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.602290  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.724667  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:55.036372  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.043120  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.044769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.101792  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.536221  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.541111  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.543457  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.601561  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.840769  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116063398s)
	W1006 14:21:55.840813  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:55.840833  806109 retry.go:31] will retry after 947.610718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.036387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.043063  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.044685  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.101635  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.536456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.541501  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.543585  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.601983  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.789245  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:57.036659  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.042057  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.044676  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.102243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.537164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.543103  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.544004  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.601850  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.839191  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.049904578s)
	W1006 14:21:57.839238  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:57.839258  806109 retry.go:31] will retry after 1.03292313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.037616  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.041961  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.044496  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.107912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.536745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.540665  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.544634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.601133  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.872574  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:59.036224  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.041408  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.044098  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.101370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.536626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.542541  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.543654  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.601836  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.922791  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.050177986s)
	W1006 14:21:59.922823  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.922842  806109 retry.go:31] will retry after 2.488598562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
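
[Editor's note] The retry.go:31 delays across this sequence (354ms, 467ms, 615ms, 947ms, 1.03s, 2.49s, 2.81s, 8.43s) grow roughly exponentially with jitter. A minimal sketch of that retry pattern, assuming a positive base delay (illustrative only, not minikube's retry package):

package retrysketch

import (
	"context"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds, ctx is cancelled, or attempts run out,
// sleeping an exponentially growing, jittered delay between tries.
func retry(ctx context.Context, attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base // must be > 0 for the jitter calculation below
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// jitter of roughly +/-25% so concurrent retriers don't synchronize
		j := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay + j):
		}
		delay *= 2
	}
	return err
}
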
	I1006 14:22:00.043764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.064604  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.065064  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.129394  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:00.537107  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.541010  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.543818  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.628309  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.036861  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.043610  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.046494  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.102249  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.537399  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.541534  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.543844  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.601153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.038594  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.041768  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.044895  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.102517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.411855  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:02.535770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.540865  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.544524  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.601881  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.036514  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.041497  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.043732  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.101053  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.551361  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.551723  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.552096  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.607741  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.821574  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.409680153s)
	W1006 14:22:03.821607  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:03.821626  806109 retry.go:31] will retry after 2.808613429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:04.036608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.042059  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.044591  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.102238  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:04.537121  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.541031  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.544043  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.638355  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.045826  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.045915  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.046027  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.103126  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.536935  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.541096  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.543811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.601370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.037342  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.048770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.049575  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.102090  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.537158  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.541167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.544718  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.601939  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.631301  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:07.036903  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.041275  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.046171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.101990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:07.537306  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.542954  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.548030  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.602151  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.038923  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.045713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.048165  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.138614  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.453750  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.822414187s)
	W1006 14:22:08.453835  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.453869  806109 retry.go:31] will retry after 8.425837281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
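
The stderr line above is kubectl's client-side schema validation: every document in an applied manifest must set both "apiVersion" and "kind", and at least one document in ig-crd.yaml evidently sets neither, so the apply exits 1 even though the other objects report "unchanged". A minimal sketch of the same check outside kubectl, assuming the file is an ordinary multi-document YAML stream (the local path and the use of gopkg.in/yaml.v3 are assumptions, not how kubectl itself is implemented):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // the addon ships the file at /etc/kubernetes/addons/ig-crd.yaml;
        // a local copy is assumed here
        f, err := os.Open("ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if doc == nil {
                continue // empty document between "---" separators
            }
            // this is the pair of fields kubectl's validator complains about
            var missing []string
            if _, ok := doc["apiVersion"]; !ok {
                missing = append(missing, "apiVersion not set")
            }
            if _, ok := doc["kind"]; !ok {
                missing = append(missing, "kind not set")
            }
            if len(missing) > 0 {
                fmt.Printf("document %d: %v\n", i, missing)
            }
        }
    }

Running a check like this against the shipped manifest would identify which document in the stream is missing its type metadata, which the kubectl error message alone does not reveal.
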
	I1006 14:22:08.536134  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.541309  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.543203  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.601173  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.037059  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.041277  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.043958  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.106411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.536191  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.540957  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.543212  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.637335  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.038746  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.041203  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.043968  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.101414  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.535919  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.541593  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.544180  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.601144  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.036181  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.041258  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.043931  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.102062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.536161  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.541576  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.545106  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.601994  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.037286  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.041743  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.043857  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.101936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.536252  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.542977  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.544737  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.602418  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.037636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.043353  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.045541  806109 kapi.go:107] duration metric: took 29.504656348s to wait for kubernetes.io/minikube-addons=registry ...
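
Each kapi.go:96 line in this stream is one tick of a poll loop over a pod label selector, and a kapi.go:107 line (like the one above, 29.5s for the registry selector) marks the moment all matching pods report Ready. A rough client-go equivalent of that wait is sketched below; the namespace, selector, kubeconfig path, and 500ms tick are taken from the log, but the structure is illustrative, not minikube's actual kapi.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching selector reports the Ready
    // condition, mirroring the "waiting for pod ... Pending" loop above.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
                return nil
            }
            fmt.Printf("waiting for pod %q\n", selector)
            time.Sleep(500 * time.Millisecond) // the log ticks roughly twice a second
        }
        return fmt.Errorf("timed out waiting for %q", selector)
    }

    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log; adjust locally
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }
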
	I1006 14:22:13.103856  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.536010  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.541542  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.602453  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.041118  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.101847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.535955  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.540895  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.601210  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.038047  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.042436  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.101780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.536551  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.541754  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.601384  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.036266  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.041349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.101883  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.535728  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.540993  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.601091  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.880118  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:17.036213  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.041368  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.102032  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:17.536149  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.541821  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.606226  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.037103  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.041146  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.102447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.125066  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.244891148s)
	W1006 14:22:18.125106  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:18.125137  806109 retry.go:31] will retry after 8.394227584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:18.536459  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.541489  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.602140  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.036341  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.041843  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.101573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.536129  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.541594  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.036705  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.040761  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.101466  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.536346  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.541417  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.602109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.037009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.042008  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.103192  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.536872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.545192  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.036447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.041450  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.101387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.537530  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.547087  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.602381  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.038711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.047024  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.102246  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.537465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.542053  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.602575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.037716  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.041932  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.105425  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.537009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.540996  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.601164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.037218  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.041462  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.101898  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.541274  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.541617  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.601533  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.037202  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.041027  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.101243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.520530  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:26.537318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.541434  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.602288  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.040735  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.101318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.536660  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.540656  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.601312  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.622677  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.102107139s)
	W1006 14:22:27.622764  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.622799  806109 retry.go:31] will retry after 8.964562377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.036352  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.041655  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.101317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:28.536873  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.542495  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.601848  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.037235  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.041321  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.101529  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.536608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.541988  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.601332  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.067966  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.069628  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.102287  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.537456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.541607  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.605527  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.047144  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.047366  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.102811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.540586  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.543600  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.601318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.041560  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.101712  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.537074  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.541459  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.637575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.037645  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.041762  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.101769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.537080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.546252  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.602460  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.049083  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.059194  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.102644  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.536345  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.541231  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.602566  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.036474  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.041683  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.101153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.536516  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.543131  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.601301  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.040029  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.041789  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.101554  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.536713  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.541523  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.587821  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:36.637573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.036522  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.042208  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.101356  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.538450  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.541912  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.601423  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.039073  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.041963  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.107975  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.260560  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.672700487s)
	W1006 14:22:38.260650  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:38.260684  806109 retry.go:31] will retry after 28.502029632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
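
By this point the retry.go:31 intervals have been 8.4s, 8.4s, 9.0s, and now 28.5s, which is consistent with a randomized, growing backoff between attempts of the same failing apply. A minimal stand-in for that pattern, assuming jittered exponential backoff (an illustrative sketch, not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping a
    // jittered, exponentially growing interval between failures.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // doubling base delay plus random jitter in [0, base)
            sleep := (base << i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        n := 0
        _ = retry(5, time.Second, func() error {
            n++
            if n < 3 {
                return fmt.Errorf("apply failed (attempt %d)", n)
            }
            return nil
        })
    }

Note that the backoff only governs when the apply is re-run; since the manifest itself is what fails validation, every retry in this log fails identically until the retry budget is exhausted.
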
	I1006 14:22:38.537841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.541302  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.634080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.042819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.044710  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.101819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.536317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.541291  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.602171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.063837  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.065152  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.160263  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.536517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.541760  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.601589  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.035811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.040992  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.101764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.537386  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.541696  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.638626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.041509  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.042425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.102420  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.536866  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.540382  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.602008  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.036485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.041855  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.104569  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.537538  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.541564  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.603912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.036751  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.041644  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.100816  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.535598  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.540901  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.605465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.067085  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.085831  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.104001  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.535733  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.541994  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.601937  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.037039  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.042662  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.100769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.538350  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.542984  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.601745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.036231  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.041572  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.101597  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.537411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.541447  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.601925  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.036062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.046387  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.106511  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.535973  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.541411  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.602406  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.082967  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.083089  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.101404  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.543349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.543936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.606022  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:50.052841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.053282  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:50.101918  806109 kapi.go:107] duration metric: took 1m1.504246684s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 14:22:50.536780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.540713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.039833  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.041873  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.536470  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.541280  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.036677  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.041641  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.536085  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.540908  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.036694  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.041925  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.536756  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.541339  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.036706  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.041617  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.536485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.541468  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:55.054778  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:55.076569  806109 kapi.go:107] duration metric: took 1m11.538807076s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 14:22:55.536329  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.036624  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.535976  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.036354  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.536109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.037892  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.536442  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.536233  806109 kapi.go:107] duration metric: took 1m8.503389262s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 14:22:59.539324  806109 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-006450 cluster.
	I1006 14:22:59.542088  806109 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 14:22:59.544863  806109 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
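
Per the note above, opting a pod out of credential mounting only requires a label with the "gcp-auth-skip-secret" key on the pod. A sketch of creating such a pod with client-go follows; the pod name, image, namespace, and the label value "true" are assumptions, since the log specifies only the label key:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // adjust locally
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds",
                // the gcp-auth webhook skips pods carrying this label key
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "nginx"},
                },
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
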
	I1006 14:23:06.763823  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:07.625986  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:07.626019  806109 retry.go:31] will retry after 17.722294339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:25.349291  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:26.187865  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:26.187971  806109 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:26.191145  806109 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1006 14:23:26.193747  806109 addons.go:514] duration metric: took 1m52.052915825s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher volcano metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1006 14:23:26.193810  806109 start.go:246] waiting for cluster config update ...
	I1006 14:23:26.193839  806109 start.go:255] writing updated cluster config ...
	I1006 14:23:26.194174  806109 ssh_runner.go:195] Run: rm -f paused
	I1006 14:23:26.198700  806109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:26.203281  806109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.213859  806109 pod_ready.go:94] pod "coredns-66bc5c9577-5b26c" is "Ready"
	I1006 14:23:26.213893  806109 pod_ready.go:86] duration metric: took 10.577014ms for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.216571  806109 pod_ready.go:83] waiting for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.223509  806109 pod_ready.go:94] pod "etcd-addons-006450" is "Ready"
	I1006 14:23:26.223539  806109 pod_ready.go:86] duration metric: took 6.938313ms for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.226276  806109 pod_ready.go:83] waiting for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.230877  806109 pod_ready.go:94] pod "kube-apiserver-addons-006450" is "Ready"
	I1006 14:23:26.230912  806109 pod_ready.go:86] duration metric: took 4.607653ms for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.233246  806109 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.603009  806109 pod_ready.go:94] pod "kube-controller-manager-addons-006450" is "Ready"
	I1006 14:23:26.603041  806109 pod_ready.go:86] duration metric: took 369.767385ms for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.803580  806109 pod_ready.go:83] waiting for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.202844  806109 pod_ready.go:94] pod "kube-proxy-rr8rw" is "Ready"
	I1006 14:23:27.202872  806109 pod_ready.go:86] duration metric: took 399.265658ms for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.402987  806109 pod_ready.go:83] waiting for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803050  806109 pod_ready.go:94] pod "kube-scheduler-addons-006450" is "Ready"
	I1006 14:23:27.803077  806109 pod_ready.go:86] duration metric: took 400.059334ms for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803090  806109 pod_ready.go:40] duration metric: took 1.604355795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:27.868687  806109 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 14:23:27.871326  806109 out.go:179] * Done! kubectl is now configured to use "addons-006450" cluster and "default" namespace by default
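
The 'inspektor-gadget' failure recorded above is a manifest problem rather than a cluster problem: kubectl validates each YAML document client-side before sending it, and a document that reaches it without the two mandatory top-level fields is rejected with the "[apiVersion not set, kind not set]" error, while the rest of the bundle (ig-deployment.yaml) applied normally, which is why the gadget DaemonSet still shows "configured". A minimal sketch of the same failure class, assuming only a working kubectl context; the ConfigMap and its name are hypothetical, and the exact error wording can vary across kubectl versions:

	cat <<'EOF' | kubectl apply --dry-run=client -f -
	apiVersion: v1
	kind: ConfigMap
	metadata:
	  name: demo          # hypothetical object, used only for the dry run
	data: {}
	EOF
	# Removing either the apiVersion or the kind line above reproduces the
	# same class of error: error validating data: [apiVersion not set, ...]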
	
	
	==> Docker <==
	Oct 06 14:22:50 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:22:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5bc940ae5deb9c1ac24f12b18d5ae91a91647aecb8f9438806b01dbdcb32c49b/resolv.conf as [nameserver 10.96.0.10 search volcano-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:22:53 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:22:53Z" level=info msg="Stop pulling image registry.k8s.io/ingress-nginx/controller:v1.13.3@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd: Status: Downloaded newer image for registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd"
	Oct 06 14:22:54 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:22:54Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: Status: Image is up to date for volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001"
	Oct 06 14:22:55 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:22:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/373e0e0396aeea0a7ead04d0d4d2152fafe728110f66f2e7f5cda74a211a44ab/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:22:55 addons-006450 dockerd[1123]: time="2025-10-06T14:22:55.501066277Z" level=warning msg="reference for unknown type: " digest="sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Oct 06 14:22:58 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7"
	Oct 06 14:22:58 addons-006450 dockerd[1123]: time="2025-10-06T14:22:58.647493480Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 06 14:22:58 addons-006450 dockerd[1123]: time="2025-10-06T14:22:58.757533505Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:23:24 addons-006450 dockerd[1123]: time="2025-10-06T14:23:24.811244345Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 06 14:23:24 addons-006450 dockerd[1123]: time="2025-10-06T14:23:24.900983915Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:24:18 addons-006450 dockerd[1123]: time="2025-10-06T14:24:18.809612092Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 06 14:24:18 addons-006450 dockerd[1123]: time="2025-10-06T14:24:18.987607996Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:24:18 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:24:18Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: Pulling from volcanosh/vc-scheduler"
	Oct 06 14:25:43 addons-006450 dockerd[1123]: time="2025-10-06T14:25:43.792657280Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 06 14:25:43 addons-006450 dockerd[1123]: time="2025-10-06T14:25:43.894678732Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:28:35 addons-006450 dockerd[1123]: time="2025-10-06T14:28:35.793025728Z" level=warning msg="reference for unknown type: " digest="sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34" remote="docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 06 14:28:37 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:28:37Z" level=info msg="Stop pulling image docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: Status: Downloaded newer image for volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34"
	Oct 06 14:28:58 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:28:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/88c62e39df8fd90c76f0a691c5ff63f85cbb39572f961eeb463b6ee72c1ca49b/resolv.conf as [nameserver 10.96.0.10 search my-volcano.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:28:58 addons-006450 dockerd[1123]: time="2025-10-06T14:28:58.528762547Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:29:10 addons-006450 dockerd[1123]: time="2025-10-06T14:29:10.971796357Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:29:37 addons-006450 dockerd[1123]: time="2025-10-06T14:29:37.971799877Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:30:19 addons-006450 dockerd[1123]: time="2025-10-06T14:30:19.080143038Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:30:19 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:30:19Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
	Oct 06 14:31:53 addons-006450 dockerd[1123]: time="2025-10-06T14:31:53.093056282Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:31:53 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:31:53Z" level=info msg="Stop pulling image nginx:latest: latest: Pulling from library/nginx"
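
These dockerd lines are the root cause of the Volcano and nginx failures in this run: every pull of docker.io/volcanosh/vc-scheduler and nginx:latest was refused by Docker Hub's unauthenticated pull limit starting at 14:22:58, and the vc-scheduler digest only downloads at 14:28:37 once the window rolled over, which is why the volcano-scheduler container in the status table below is minutes younger than every other addon container; nginx:latest was still being refused at 14:31:53 when the test timed out. Remaining quota can be checked from the runner with Docker's documented rate-limit probe, assuming curl and jq are installed:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
	# ratelimit-limit:     100;w=21600   pulls allowed per 6-hour window
	# ratelimit-remaining: 0;w=21600     zero matches the "toomanyrequests" errors above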
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0e36cc82fe950       volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34                                               3 minutes ago       Running             volcano-scheduler                        0                   e0aba4b3d93bd       volcano-scheduler-76c996c8bf-wqkfr          volcano-system
	e3ed3d2c79d89       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:94f0c448171b974aab7b4a96d00feb5799b1d69827a738a4f8b4b30c17fb74e7                                 9 minutes ago       Running             gcp-auth                                 0                   373e0e0396aee       gcp-auth-78565c9fb4-xfzjp                   gcp-auth
	8acdf361565a3       volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001                                         9 minutes ago       Running             admission                                0                   5bc940ae5deb9       volcano-admission-6c447bd768-2szwf          volcano-system
	f2a47081481dc       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             9 minutes ago       Running             controller                               0                   bd5557adaf3c6       ingress-nginx-controller-675c5ddd98-k4m4k   ingress-nginx
	e400809cac569       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	9c0d6f72f1f92       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          9 minutes ago       Running             csi-provisioner                          0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	4876f3a9c229a       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            9 minutes ago       Running             liveness-probe                           0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	7f7bdac7cf59b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           9 minutes ago       Running             hostpath                                 0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	4eeec494d7d9f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                9 minutes ago       Running             node-driver-registrar                    0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	856a19a7b09f4       volcanosh/vc-controller-manager@sha256:8dd7ce0cef2df19afb14ba26bec90e2999a3c0397ebe5c9d75a5f68d1c80d242                                      9 minutes ago       Running             volcano-controllers                      0                   33fad1d340dba       volcano-controllers-6fd4f85cb8-l5t58        volcano-system
	6f25a4d6caf64       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	0201aae6c64e0       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   0ca9bd27ecd5a       csi-hostpath-resizer-0                      kube-system
	9e27aa581454c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             9 minutes ago       Running             csi-attacher                             0                   1a820fa8b56fd       csi-hostpath-attacher-0                     kube-system
	05cbc48ffb51e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   4dcf8198ace65       snapshot-controller-7d9fbc56b8-6bdv2        kube-system
	2e1fd961dc8a7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      9 minutes ago       Running             volume-snapshot-controller               0                   47610a948360b       snapshot-controller-7d9fbc56b8-8stqh        kube-system
	02001c5bf8ca9       9a80c0c8eb61c                                                                                                                                9 minutes ago       Exited              patch                                    1                   67b15011fa29d       ingress-nginx-admission-patch-s6s8k         ingress-nginx
	11587ae8b0259       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   9 minutes ago       Exited              create                                   0                   6d0aa0c7acb77       ingress-nginx-admission-create-t2tnf        ingress-nginx
	509e7623ba228       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            9 minutes ago       Running             gadget                                   0                   14032f9fa6ab7       gadget-mwfpm                                gadget
	d3025a0e45236       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        9 minutes ago       Running             yakd                                     0                   510624cc4af1e       yakd-dashboard-5ff678cb9-nfj9q              yakd-dashboard
	0244185030bd7       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         9 minutes ago       Running             minikube-ingress-dns                     0                   5fb11b5433718       kube-ingress-dns-minikube                   kube-system
	385b41735590f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:f832bbe1d48c62de040bd793937eaa0c05d2f945a55376a99c80a4dd9961aeb1                              9 minutes ago       Running             registry-proxy                           0                   8b6806e8a2031       registry-proxy-wd7b6                        kube-system
	7c848b41913dc       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       9 minutes ago       Running             local-path-provisioner                   0                   dd0d4f86343b0       local-path-provisioner-648f6765c9-fmrx9     local-path-storage
	75c19a63dce9b       registry@sha256:3725021071ec9383eb3d87ddbdff9ed602439b3f7c958c9c2fb941049ea6531d                                                             9 minutes ago       Running             registry                                 0                   19e88fa72d39f       registry-66898fdd98-btgr2                   kube-system
	be40b7b342bcc       registry.k8s.io/metrics-server/metrics-server@sha256:89258156d0e9af60403eafd44da9676fd66f600c7934d468ccc17e42b199aee2                        9 minutes ago       Running             metrics-server                           0                   9be9dc44a48ee       metrics-server-85b7d694d7-s77t8             kube-system
	8cf5351cc4642       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               9 minutes ago       Running             cloud-spanner-emulator                   0                   4916510c10c2b       cloud-spanner-emulator-85f6b7fc65-zjsh8     default
	aa8b68706bef2       nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd                                     9 minutes ago       Running             nvidia-device-plugin-ctr                 0                   48071d8f52e3b       nvidia-device-plugin-daemonset-d29s2        kube-system
	59bd3def26ae0       ba04bb24b9575                                                                                                                                10 minutes ago      Running             storage-provisioner                      0                   a23e97739eb30       storage-provisioner                         kube-system
	1f08a0b17053c       138784d87c9c5                                                                                                                                10 minutes ago      Running             coredns                                  0                   41c06ea8e8dab       coredns-66bc5c9577-5b26c                    kube-system
	2c89530d2d498       05baa95f5142d                                                                                                                                10 minutes ago      Running             kube-proxy                               0                   3401ff6190b48       kube-proxy-rr8rw                            kube-system
	9184b772f37f1       7eb2c6ff0c5a7                                                                                                                                10 minutes ago      Running             kube-controller-manager                  0                   431c21e60ec20       kube-controller-manager-addons-006450       kube-system
	16d61d5012e7c       b5f57ec6b9867                                                                                                                                10 minutes ago      Running             kube-scheduler                           0                   a52e4c8396f58       kube-scheduler-addons-006450                kube-system
	e5031a852e78a       43911e833d64d                                                                                                                                10 minutes ago      Running             kube-apiserver                           0                   dc93b2d9f3eda       kube-apiserver-addons-006450                kube-system
	57ec1a2227a7f       a1894772a478e                                                                                                                                10 minutes ago      Running             etcd                                     0                   31b1c12560e88       etcd-addons-006450                          kube-system
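
Note what is missing from this table: no container for test-job-nginx-0 ever appears, because its nginx:latest pull never succeeded and the pod never got past ContainersNotReady. For rate-limited CI runs, one hedged workaround sketch is to pre-seed the image into the node so the kubelet never contacts Docker Hub; the profile name below is the one from this run, and the host running the pull is assumed to still have quota:

	docker pull nginx:latest                          # on a host with remaining quota
	minikube -p addons-006450 image load nginx:latest # copy it into the node's runtime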
	
	
	==> controller_ingress [f2a47081481d] <==
	I1006 14:22:54.615253       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="34" git="v1.34.1" state="clean" commit="93248f9ae092f571eb870b7664c534bfc7d00f03" platform="linux/arm64"
	I1006 14:22:55.092177       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I1006 14:22:55.133013       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I1006 14:22:55.176796       7 nginx.go:273] "Starting NGINX Ingress controller"
	I1006 14:22:55.195604       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"32a45df3-74a5-4ea9-936a-f28fd578ad3e", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I1006 14:22:55.195635       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"099c9900-05c9-44db-81b3-eef52f1f3a83", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I1006 14:22:55.200180       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"d5cccdc5-0996-4162-a64b-e6dd2f57b85c", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I1006 14:22:56.377724       7 nginx.go:319] "Starting NGINX process"
	I1006 14:22:56.377791       7 leaderelection.go:257] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I1006 14:22:56.378131       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1006 14:22:56.378531       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1006 14:22:56.385446       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1006 14:22:56.386528       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-k4m4k"
	I1006 14:22:56.394070       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-k4m4k" node="addons-006450"
	I1006 14:22:56.402954       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-k4m4k" node="addons-006450"
	I1006 14:22:56.426765       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:22:56.426836       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1006 14:22:56.427067       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Release:       v1.13.3
	  Build:         93851f05e61d99eea49140c9be73499a3cb92ccc
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.27.1
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [1f08a0b17053] <==
	[INFO] 10.244.0.7:56542 - 9411 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000095736s
	[INFO] 10.244.0.7:56542 - 60028 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002381032s
	[INFO] 10.244.0.7:56542 - 33336 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002392707s
	[INFO] 10.244.0.7:56542 - 54232 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000178786s
	[INFO] 10.244.0.7:56542 - 7333 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136449s
	[INFO] 10.244.0.7:33056 - 46078 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000279019s
	[INFO] 10.244.0.7:33056 - 46299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298719s
	[INFO] 10.244.0.7:56424 - 24690 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002456s
	[INFO] 10.244.0.7:56424 - 24468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000268837s
	[INFO] 10.244.0.7:59046 - 6419 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000205798s
	[INFO] 10.244.0.7:59046 - 6231 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164198s
	[INFO] 10.244.0.7:57987 - 61663 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001803179s
	[INFO] 10.244.0.7:57987 - 61843 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002072492s
	[INFO] 10.244.0.7:52614 - 11017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243541s
	[INFO] 10.244.0.7:52614 - 10853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192292s
	[INFO] 10.244.0.26:44951 - 63731 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272135s
	[INFO] 10.244.0.26:43415 - 16328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118021s
	[INFO] 10.244.0.26:39889 - 25486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139116s
	[INFO] 10.244.0.26:39105 - 18081 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154197s
	[INFO] 10.244.0.26:56273 - 11862 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000274474s
	[INFO] 10.244.0.26:44777 - 21446 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000313833s
	[INFO] 10.244.0.26:47488 - 37580 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00207181s
	[INFO] 10.244.0.26:50437 - 7597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001591703s
	[INFO] 10.244.0.26:49063 - 42612 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001943943s
	[INFO] 10.244.0.26:39378 - 64309 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00241089s
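
The NXDOMAIN runs above are expected resolver behavior, not a fault: the pod resolv.conf that cri-dockerd quotes in the Docker log sets ndots:5, so a name with fewer than five dots, such as storage.googleapis.com, is first tried against every search suffix in order, producing four NXDOMAIN answers before the absolute query returns NOERROR. Reconstructed from that log line, the file inside a gcp-auth pod reads:

	$ cat /etc/resolv.conf
	nameserver 10.96.0.10
	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	options ndots:5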
	
	
	==> describe nodes <==
	Name:               addons-006450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-006450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006450
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-006450"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:21:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:31:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:29:08 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:29:08 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:29:08 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:29:08 +0000   Mon, 06 Oct 2025 14:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0364ef7d33ec438ea80b3763bd3b6ccc
	  System UUID:                35426571-e524-4094-b847-4e5d39cdb9e6
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-85f6b7fc65-zjsh8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gadget                      gadget-mwfpm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  gcp-auth                    gcp-auth-78565c9fb4-xfzjp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-k4m4k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-5b26c                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-jdxpx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-006450                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-006450                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-006450        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-rr8rw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-006450                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-85b7d694d7-s77t8              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-d29s2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-66898fdd98-btgr2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-creds-764b6fb674-gxwfl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-proxy-wd7b6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-6bdv2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-8stqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-648f6765c9-fmrx9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  my-volcano                  test-job-nginx-0                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  volcano-system              volcano-admission-6c447bd768-2szwf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  volcano-system              volcano-controllers-6fd4f85cb8-l5t58         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  volcano-system              volcano-scheduler-76c996c8bf-wqkfr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nfj9q               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             588Mi (7%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   NodeReady                10m                kubelet          Node addons-006450 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node addons-006450 event: Registered Node addons-006450 in Controller
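
Two things worth reading off this node view: test-job-nginx-0 is listed among the non-terminated pods, so scheduling succeeded and the failure is purely the image pull; and CPU requests already sit at 950m of the 2-CPU allocatable (47%), normal headroom pressure for the full addon set on this runner but not a factor here. To confirm the pull diagnosis against the same context, the pod's event stream is the quickest check (all names are the ones from this run):

	kubectl --context addons-006450 -n my-volcano get events \
	  --field-selector involvedObject.name=test-job-nginx-0 \
	  --sort-by=.lastTimestamp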
	
	
	==> dmesg <==
	[Oct 6 12:53] systemd-journald[226]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57ec1a2227a7] <==
	{"level":"warn","ts":"2025-10-06T14:21:25.112983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.127761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.149184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.170769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.187747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.208847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.304509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.763248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.777779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.281548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.337199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.387982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.452451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.481768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.595747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.614909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.631591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.664368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.680487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.697752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.764439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.772435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:31:23.319583Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1712}
	{"level":"info","ts":"2025-10-06T14:31:23.387482Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1712,"took":"67.368638ms","hash":2638762742,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4431872,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2025-10-06T14:31:23.387544Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2638762742,"revision":1712,"compact-revision":-1}
	
	
	==> gcp-auth [e3ed3d2c79d8] <==
	2025/10/06 14:22:58 GCP Auth Webhook started!
	2025/10/06 14:28:56 Ready to marshal response ...
	2025/10/06 14:28:56 Ready to write response ...
	2025/10/06 14:28:56 Ready to marshal response ...
	2025/10/06 14:28:56 Ready to write response ...
	
	
	==> kernel <==
	 14:31:58 up 21:14,  0 user,  load average: 0.62, 0.95, 2.19
	Linux addons-006450 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e5031a852e78] <==
	W1006 14:22:03.337169       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.382795       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.441950       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.477558       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.571937       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1006 14:22:03.610228       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1006 14:22:03.628337       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.659904       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.677457       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.696994       1 logging.go:55] [core] [Channel #310 SubChannel #311]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1006 14:22:03.742109       1 logging.go:55] [core] [Channel #314 SubChannel #315]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1006 14:22:03.770267       1 logging.go:55] [core] [Channel #318 SubChannel #319]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1006 14:22:17.855370       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.103.29:443: connect: connection refused" logger="UnhandledError"
	W1006 14:22:17.855850       1 handler_proxy.go:99] no RequestInfo found in the context
	E1006 14:22:17.855919       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1006 14:22:17.857318       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.103.29:443: connect: connection refused" logger="UnhandledError"
	E1006 14:22:17.861943       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.103.29:443: connect: connection refused" logger="UnhandledError"
	E1006 14:22:17.883577       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.103.29:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.103.29:443: connect: connection refused" logger="UnhandledError"
	I1006 14:22:18.002057       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1006 14:28:56.471240       1 controller.go:667] quota admission added evaluator for: jobs.batch.volcano.sh
	I1006 14:28:56.510248       1 controller.go:667] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I1006 14:31:26.220232       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
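
The burst of "v1beta1.metrics.k8s.io failed" errors at 14:22:17 is the usual gap between the metrics-server APIService being registered and its backing pod answering on 10.96.103.29:443; the GroupVersion is added one second later and the errors never recur. A quick health check for that aggregation layer, using this run's context (the jsonpath prints "True" once the APIService is available):

	kubectl --context addons-006450 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'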
	
	
	==> kube-controller-manager [9184b772f37f] <==
	I1006 14:21:33.253378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 14:21:33.253401       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 14:21:33.253414       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 14:21:33.253426       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 14:21:33.253435       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1006 14:21:33.253465       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1006 14:21:33.254105       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1006 14:21:33.253490       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1006 14:21:33.256268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 14:21:33.267599       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1006 14:21:33.267643       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1006 14:21:39.875934       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1006 14:22:03.224213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1006 14:22:03.224353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I1006 14:22:03.224382       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I1006 14:22:03.224407       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I1006 14:22:03.224427       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I1006 14:22:03.224448       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1006 14:22:03.224469       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I1006 14:22:03.224499       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I1006 14:22:03.224558       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1006 14:22:03.271908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1006 14:22:03.282873       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1006 14:22:04.725047       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 14:22:04.883605       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2c89530d2d49] <==
	I1006 14:21:35.738189       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:21:35.837556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:21:35.938392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:21:35.938475       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:21:35.938596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:21:36.026114       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:21:36.026170       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:21:36.061180       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:21:36.061523       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:21:36.061547       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:36.062743       1 config.go:200] "Starting service config controller"
	I1006 14:21:36.062767       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:21:36.063897       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:21:36.063910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:21:36.063943       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:21:36.063947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:21:36.064746       1 config.go:309] "Starting node config controller"
	I1006 14:21:36.064764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:21:36.064771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:21:36.163641       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:21:36.164636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:21:36.164662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16d61d5012e7] <==
	I1006 14:21:27.016131       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:27.020700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.020968       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.021894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:21:27.023886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 14:21:27.029893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 14:21:27.030068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 14:21:27.038289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 14:21:27.038473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 14:21:27.038518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 14:21:27.038557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 14:21:27.040442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 14:21:27.040803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 14:21:27.040860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 14:21:27.040908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 14:21:27.040970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 14:21:27.041025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 14:21:27.041090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 14:21:27.041145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 14:21:27.041189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 14:21:27.041328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 14:21:27.041374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 14:21:27.041451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 14:21:27.041497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1006 14:21:28.621743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 14:29:50 addons-006450 kubelet[2258]: E1006 14:29:50.753665    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:29:51 addons-006450 kubelet[2258]: E1006 14:29:51.913385    2258 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 06 14:29:51 addons-006450 kubelet[2258]: E1006 14:29:51.913479    2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8521a0d-ed5a-452c-9fe0-94e6798668f2-gcr-creds podName:a8521a0d-ed5a-452c-9fe0-94e6798668f2 nodeName:}" failed. No retries permitted until 2025-10-06 14:31:53.913459765 +0000 UTC m=+625.302667299 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a8521a0d-ed5a-452c-9fe0-94e6798668f2-gcr-creds") pod "registry-creds-764b6fb674-gxwfl" (UID: "a8521a0d-ed5a-452c-9fe0-94e6798668f2") : secret "registry-creds-gcr" not found
	Oct 06 14:30:05 addons-006450 kubelet[2258]: E1006 14:30:05.752068    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:30:19 addons-006450 kubelet[2258]: E1006 14:30:19.084658    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 06 14:30:19 addons-006450 kubelet[2258]: E1006 14:30:19.084705    2258 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 06 14:30:19 addons-006450 kubelet[2258]: E1006 14:30:19.084770    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(b9b10c39-50d5-4b81-bc84-afbdbd30c824): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:30:19 addons-006450 kubelet[2258]: E1006 14:30:19.084801    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:30:23 addons-006450 kubelet[2258]: E1006 14:30:23.753685    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-gxwfl" podUID="a8521a0d-ed5a-452c-9fe0-94e6798668f2"
	Oct 06 14:30:33 addons-006450 kubelet[2258]: I1006 14:30:33.751716    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-btgr2" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:30:33 addons-006450 kubelet[2258]: E1006 14:30:33.752716    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:30:44 addons-006450 kubelet[2258]: E1006 14:30:44.752533    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:30:56 addons-006450 kubelet[2258]: E1006 14:30:56.768574    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:30:59 addons-006450 kubelet[2258]: I1006 14:30:59.751939    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-d29s2" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:31:05 addons-006450 kubelet[2258]: I1006 14:31:05.752541    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wd7b6" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:31:11 addons-006450 kubelet[2258]: E1006 14:31:11.752852    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:31:25 addons-006450 kubelet[2258]: E1006 14:31:25.752207    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:31:37 addons-006450 kubelet[2258]: E1006 14:31:37.752966    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:latest\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:31:53 addons-006450 kubelet[2258]: E1006 14:31:53.096737    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 06 14:31:53 addons-006450 kubelet[2258]: E1006 14:31:53.096803    2258 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Oct 06 14:31:53 addons-006450 kubelet[2258]: E1006 14:31:53.096897    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod test-job-nginx-0_my-volcano(b9b10c39-50d5-4b81-bc84-afbdbd30c824): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:31:53 addons-006450 kubelet[2258]: E1006 14:31:53.096944    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="my-volcano/test-job-nginx-0" podUID="b9b10c39-50d5-4b81-bc84-afbdbd30c824"
	Oct 06 14:31:53 addons-006450 kubelet[2258]: E1006 14:31:53.989021    2258 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 06 14:31:53 addons-006450 kubelet[2258]: E1006 14:31:53.989119    2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8521a0d-ed5a-452c-9fe0-94e6798668f2-gcr-creds podName:a8521a0d-ed5a-452c-9fe0-94e6798668f2 nodeName:}" failed. No retries permitted until 2025-10-06 14:33:55.989100254 +0000 UTC m=+747.378307780 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a8521a0d-ed5a-452c-9fe0-94e6798668f2-gcr-creds") pod "registry-creds-764b6fb674-gxwfl" (UID: "a8521a0d-ed5a-452c-9fe0-94e6798668f2") : secret "registry-creds-gcr" not found
	Oct 06 14:31:54 addons-006450 kubelet[2258]: I1006 14:31:54.752226    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-btgr2" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [59bd3def26ae] <==
	W1006 14:31:34.601127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:36.604815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:36.611782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:38.614984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:38.621878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:40.625689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:40.630261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:42.633086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:42.637884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:44.641696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:44.649045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:46.652307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:46.657565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:48.660897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:48.667996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:50.670895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:50.677635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:52.680512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:52.693394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:54.696454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:54.701411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:56.706725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:56.726147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:58.729853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:31:58.735170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
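Note on the log above: besides the nginx pull failures, the kubelet is also stuck on MountVolume.SetUp for pod registry-creds-764b6fb674-gxwfl because the secret kube-system/registry-creds-gcr does not exist, with retries backed off 2m2s at a time. A quick way to confirm and address this from the same context, using only commands that already appear in this run (the addons configure step is the one recorded later in the Audit table):

	kubectl --context addons-006450 -n kube-system get secret registry-creds-gcr
	out/minikube-linux-arm64 -p addons-006450 addons configure registry-creds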
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
helpers_test.go:269: (dbg) Run:  kubectl --context addons-006450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl test-job-nginx-0
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-006450 describe pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl test-job-nginx-0
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-006450 describe pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl test-job-nginx-0: exit status 1 (107.259537ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t2tnf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s6s8k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-gxwfl" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-006450 describe pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl test-job-nginx-0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable volcano --alsologtostderr -v=1: (12.182229381s)
--- FAIL: TestAddons/serial/Volcano (524.12s)
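Every kubelet error above is the same failure: Docker Hub rejects the unauthenticated pull of nginx:latest with toomanyrequests, so test-job-nginx-0 never leaves ImagePullBackOff before the 3m0s deadline. A minimal sketch for checking the remaining anonymous quota from the affected host, assuming Docker Hub's documented ratelimitpreview/test endpoint (the token URL, the test repository, and the RateLimit response headers are Docker Hub conventions, not part of this test suite):

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

A HEAD request to this manifest is documented as not counting against the limit; when throttling applies, the response carries ratelimit-limit and ratelimit-remaining headers.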

TestAddons/parallel/Ingress (492.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-006450 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-006450 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-006450 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [effafea4-bd61-4243-a42c-72930366d494] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-06 14:41:03.026802316 +0000 UTC m=+1241.824671079
addons_test.go:252: (dbg) Run:  kubectl --context addons-006450 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-006450 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-006450/192.168.49.2
Start Time:       Mon, 06 Oct 2025 14:33:02 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
  IP:  10.244.0.31
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jbnj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6jbnj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-006450
  Warning  Failed     6m25s (x3 over 8m)      kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    4m55s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     4m54s (x5 over 8m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x2 over 7m20s)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    2m49s (x22 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2m49s (x22 over 7m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-006450 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-006450 logs nginx -n default: exit status 1 (112.766489ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-006450 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
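The root cause matches the Volcano failure: every pull of docker.io/nginx:alpine is refused with toomanyrequests, as the Events table above shows. One workaround sketch, assuming the CI host still has pull quota or a locally cached copy of the image, is to side-load the image into the cluster node so the kubelet never contacts Docker Hub (image load is a standard minikube subcommand; the profile flag mirrors the commands used elsewhere in this report):

	docker pull nginx:alpine
	out/minikube-linux-arm64 -p addons-006450 image load nginx:alpine

Since nginx:alpine is not the :latest tag, the pod's default imagePullPolicy is IfNotPresent, so a preloaded image would let the container start without a registry round-trip.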
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-006450
helpers_test.go:243: (dbg) docker inspect addons-006450:

-- stdout --
	[
	    {
	        "Id": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	        "Created": "2025-10-06T14:21:00.2900908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:21:00.391293391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hosts",
	        "LogPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90-json.log",
	        "Name": "/addons-006450",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006450:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006450",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	                "LowerDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-006450",
	                "Source": "/var/lib/docker/volumes/addons-006450/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006450",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006450",
	                "name.minikube.sigs.k8s.io": "addons-006450",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09ddbf4aed5db91393a32b35522feed3626a6a03e08f6e0448ebb5aad5998ddd",
	            "SandboxKey": "/var/run/docker/netns/09ddbf4aed5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006450": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f4:99:c4:a9:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "165f6e38041442732f4da1d95818020ddb3d0bf16ac6242c03ef818c1b73d7fb",
	                    "EndpointID": "b2523cc159053c0b4c03cccafdf39f8b82bb8b5c7e911427f39eed28857482fc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006450",
	                        "fedf355814c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006450 -n addons-006450
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 logs -n 25: (1.189695406s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-379615                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ -o=json --download-only -p download-only-023239 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-379615                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p download-docker-403886 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p download-docker-403886                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p binary-mirror-859483 --alsologtostderr --binary-mirror http://127.0.0.1:42473 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p binary-mirror-859483                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ addons  │ enable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ start   │ -p addons-006450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:23 UTC │
	│ addons  │ addons-006450 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:31 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ enable headlamp -p addons-006450 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ ip      │ addons-006450 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:33 UTC │ 06 Oct 25 14:33 UTC │
	│ addons  │ addons-006450 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ addons-006450 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ addons-006450 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:20:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:20:33.934280  806109 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:20:33.934452  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934482  806109 out.go:374] Setting ErrFile to fd 2...
	I1006 14:20:33.934503  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934791  806109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:20:33.935342  806109 out.go:368] Setting JSON to false
	I1006 14:20:33.936278  806109 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":75786,"bootTime":1759684648,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:20:33.936380  806109 start.go:140] virtualization:  
	I1006 14:20:33.939820  806109 out.go:179] * [addons-006450] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:20:33.942845  806109 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:20:33.942925  806109 notify.go:220] Checking for updates...
	I1006 14:20:33.949235  806109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:20:33.952125  806109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:20:33.955049  806109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:20:33.957833  806109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:20:33.960596  806109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:20:33.963595  806109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:20:33.986303  806109 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:20:33.986439  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.050609  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.04143491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.050721  806109 docker.go:318] overlay module found
	I1006 14:20:34.053842  806109 out.go:179] * Using the docker driver based on user configuration
	I1006 14:20:34.056712  806109 start.go:304] selected driver: docker
	I1006 14:20:34.056733  806109 start.go:924] validating driver "docker" against <nil>
	I1006 14:20:34.056748  806109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:20:34.057477  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.111822  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.102783115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.111982  806109 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:20:34.112211  806109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:20:34.115275  806109 out.go:179] * Using Docker driver with root privileges
	I1006 14:20:34.118173  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:20:34.118253  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:20:34.118263  806109 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 14:20:34.118342  806109 start.go:348] cluster config:
	{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:20:34.121483  806109 out.go:179] * Starting "addons-006450" primary control-plane node in "addons-006450" cluster
	I1006 14:20:34.124347  806109 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:20:34.127249  806109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:20:34.130100  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:34.130168  806109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1006 14:20:34.130177  806109 cache.go:58] Caching tarball of preloaded images
	I1006 14:20:34.130222  806109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:20:34.130282  806109 preload.go:233] Found /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1006 14:20:34.130293  806109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1006 14:20:34.130624  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:20:34.130655  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json: {Name:mk78082a38967c23c9e0fec5499d829d2aa5600d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:20:34.149434  806109 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 14:20:34.149575  806109 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 14:20:34.149597  806109 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 14:20:34.149602  806109 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 14:20:34.149610  806109 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 14:20:34.149626  806109 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 14:20:52.383725  806109 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 14:20:52.383777  806109 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:20:52.383807  806109 start.go:360] acquireMachinesLock for addons-006450: {Name:mk6a488a7fef2004d8c41401b261288db1a55041 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:20:52.383940  806109 start.go:364] duration metric: took 111.276µs to acquireMachinesLock for "addons-006450"
	I1006 14:20:52.383972  806109 start.go:93] Provisioning new machine with config: &{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:20:52.384058  806109 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:20:52.387398  806109 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 14:20:52.387686  806109 start.go:159] libmachine.API.Create for "addons-006450" (driver="docker")
	I1006 14:20:52.387754  806109 client.go:168] LocalClient.Create starting
	I1006 14:20:52.387880  806109 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem
	I1006 14:20:52.755986  806109 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem
	I1006 14:20:54.000215  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:20:54.021843  806109 cli_runner.go:211] docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:20:54.021935  806109 network_create.go:284] running [docker network inspect addons-006450] to gather additional debugging logs...
	I1006 14:20:54.021951  806109 cli_runner.go:164] Run: docker network inspect addons-006450
	W1006 14:20:54.038245  806109 cli_runner.go:211] docker network inspect addons-006450 returned with exit code 1
	I1006 14:20:54.038287  806109 network_create.go:287] error running [docker network inspect addons-006450]: docker network inspect addons-006450: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006450 not found
	I1006 14:20:54.038299  806109 network_create.go:289] output of [docker network inspect addons-006450]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006450 not found
	
	** /stderr **
	I1006 14:20:54.038438  806109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:20:54.055471  806109 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d4c380}
	I1006 14:20:54.055517  806109 network_create.go:124] attempt to create docker network addons-006450 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:20:54.055572  806109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006450 addons-006450
	I1006 14:20:54.110341  806109 network_create.go:108] docker network addons-006450 192.168.49.0/24 created
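	For reference, the subnet and gateway chosen above can be confirmed against the live network; a minimal sketch using the network name from this run:
	# Print the subnet and gateway of the freshly created minikube network
	docker network inspect addons-006450 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected output: 192.168.49.0/24 192.168.49.1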
	I1006 14:20:54.110371  806109 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006450" container
	I1006 14:20:54.110459  806109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:20:54.127884  806109 cli_runner.go:164] Run: docker volume create addons-006450 --label name.minikube.sigs.k8s.io=addons-006450 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:20:54.148808  806109 oci.go:103] Successfully created a docker volume addons-006450
	I1006 14:20:54.148892  806109 cli_runner.go:164] Run: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:20:56.324467  806109 cli_runner.go:217] Completed: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.175532295s)
	I1006 14:20:56.324511  806109 oci.go:107] Successfully prepared a docker volume addons-006450
	I1006 14:20:56.324545  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:56.324566  806109 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:20:56.324627  806109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:21:00.168028  806109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.843356071s)
	I1006 14:21:00.168062  806109 kic.go:203] duration metric: took 3.843492791s to extract preloaded images to volume ...
	W1006 14:21:00.168228  806109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 14:21:00.168353  806109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:21:00.269120  806109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006450 --name addons-006450 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006450 --network addons-006450 --ip 192.168.49.2 --volume addons-006450:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:21:00.667135  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Running}}
	I1006 14:21:00.686913  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:00.708915  806109 cli_runner.go:164] Run: docker exec addons-006450 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:21:00.766467  806109 oci.go:144] the created container "addons-006450" has a running status.
	I1006 14:21:00.766496  806109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa...
	I1006 14:21:01.209222  806109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:21:01.244403  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.278442  806109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:21:01.278462  806109 kic_runner.go:114] Args: [docker exec --privileged addons-006450 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:21:01.342721  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.366223  806109 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:01.366312  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.386115  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.388381  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.388404  806109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:01.583723  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.583748  806109 ubuntu.go:182] provisioning hostname "addons-006450"
	I1006 14:21:01.583829  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.604321  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.604631  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.604648  806109 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006450 && echo "addons-006450" | sudo tee /etc/hostname
	I1006 14:21:01.762558  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.762702  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.783081  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.783379  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.783396  806109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:01.932033  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
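	The hostname script above guarantees an /etc/hosts entry mapping 127.0.1.1 to the new hostname, rewriting an existing 127.0.1.1 line if one is present and appending one otherwise. A quick spot-check from the host side (container name taken from this run):
	# Verify the 127.0.1.1 entry inside the node container
	docker exec addons-006450 grep '^127.0.1.1' /etc/hosts
	# expected output: 127.0.1.1 addons-006450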
	I1006 14:21:01.932056  806109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-803497/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-803497/.minikube}
	I1006 14:21:01.932087  806109 ubuntu.go:190] setting up certificates
	I1006 14:21:01.932101  806109 provision.go:84] configureAuth start
	I1006 14:21:01.932162  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:01.953264  806109 provision.go:143] copyHostCerts
	I1006 14:21:01.953391  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem (1082 bytes)
	I1006 14:21:01.953509  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem (1123 bytes)
	I1006 14:21:01.953572  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem (1675 bytes)
	I1006 14:21:01.953642  806109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem org=jenkins.addons-006450 san=[127.0.0.1 192.168.49.2 addons-006450 localhost minikube]
	I1006 14:21:02.364998  806109 provision.go:177] copyRemoteCerts
	I1006 14:21:02.365098  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:02.365155  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.381521  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
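	The repeated `docker container inspect -f "{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}"` calls above resolve the host port Docker mapped to the container's SSH port (37506 in this run). An equivalent, shorter lookup:
	# Show the host address:port published for the container's SSH port
	docker port addons-006450 22/tcp
	# prints e.g. 127.0.0.1:37506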
	I1006 14:21:02.475833  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:02.494054  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:02.512540  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 14:21:02.530771  806109 provision.go:87] duration metric: took 598.646522ms to configureAuth
	I1006 14:21:02.530795  806109 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:02.531031  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:02.531089  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.548485  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.548797  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.548814  806109 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 14:21:02.680553  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1006 14:21:02.680572  806109 ubuntu.go:71] root file system type: overlay
	I1006 14:21:02.680735  806109 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 14:21:02.680812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.697880  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.698189  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.698287  806109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 14:21:02.846019  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1006 14:21:02.846167  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.863632  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.864002  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.864029  806109 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 14:21:03.799164  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-06 14:21:02.840466123 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
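	The empty `ExecStart=` the diff shows being inserted is the standard systemd idiom for overriding a unit that already defines a start command: the blank directive clears the inherited value so the merged unit does not end up with two `ExecStart=` settings. A minimal sketch of the same pattern as a drop-in (hypothetical override path, not the file minikube writes):
	# Reset the inherited ExecStart, then supply the replacement; without the
	# empty directive systemd refuses a non-oneshot unit with two ExecStart= lines.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd://\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker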
	
	I1006 14:21:03.799202  806109 machine.go:96] duration metric: took 2.432959766s to provisionDockerMachine
	I1006 14:21:03.799214  806109 client.go:171] duration metric: took 11.411453149s to LocalClient.Create
	I1006 14:21:03.799235  806109 start.go:167] duration metric: took 11.41157629s to libmachine.API.Create "addons-006450"
	I1006 14:21:03.799246  806109 start.go:293] postStartSetup for "addons-006450" (driver="docker")
	I1006 14:21:03.799257  806109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:03.799333  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:03.799381  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.817018  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:03.911433  806109 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:03.914606  806109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:03.914683  806109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:03.914699  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/addons for local assets ...
	I1006 14:21:03.914767  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/files for local assets ...
	I1006 14:21:03.914795  806109 start.go:296] duration metric: took 115.542737ms for postStartSetup
	I1006 14:21:03.915135  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:03.931532  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:21:03.931854  806109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:03.931910  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.948768  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.041025  806109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:04.046229  806109 start.go:128] duration metric: took 11.662156071s to createHost
	I1006 14:21:04.046252  806109 start.go:83] releasing machines lock for "addons-006450", held for 11.662297525s
	I1006 14:21:04.046327  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:04.063754  806109 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:04.063815  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.063893  806109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:04.063975  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.082777  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.099024  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.268948  806109 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:04.275561  806109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:21:04.279819  806109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:04.279895  806109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:04.306291  806109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 14:21:04.306318  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.306351  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.306446  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.320125  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1006 14:21:04.329116  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 14:21:04.338037  806109 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.338156  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 14:21:04.347404  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.357144  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 14:21:04.366129  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.374845  806109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:04.382821  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 14:21:04.391940  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1006 14:21:04.400832  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1006 14:21:04.409604  806109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:04.417019  806109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:04.424313  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:04.532131  806109 ssh_runner.go:195] Run: sudo systemctl restart containerd
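	The "cgroupfs" result above is derived by inspecting the host; two quick ways to cross-check the driver (the `docker info` form is the same query this log issues later):
	# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# Ask the Docker daemon which cgroup driver it is actually using
	docker info --format '{{.CgroupDriver}}'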
	I1006 14:21:04.625905  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.625977  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.626053  806109 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 14:21:04.640910  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.654413  806109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:04.685901  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.698603  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 14:21:04.711790  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.725497  806109 ssh_runner.go:195] Run: which cri-dockerd
	I1006 14:21:04.729345  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 14:21:04.737737  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1006 14:21:04.751393  806109 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 14:21:04.873692  806109 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 14:21:04.984971  806109 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.985108  806109 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1006 14:21:05.002843  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1006 14:21:05.020602  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.142830  806109 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 14:21:05.525909  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:05.538352  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1006 14:21:05.551902  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:05.567756  806109 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 14:21:05.691941  806109 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 14:21:05.814431  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.934017  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 14:21:05.949991  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1006 14:21:05.962662  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.092789  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1006 14:21:06.164834  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:06.178359  806109 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 14:21:06.178520  806109 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 14:21:06.182231  806109 start.go:563] Will wait 60s for crictl version
	I1006 14:21:06.182343  806109 ssh_runner.go:195] Run: which crictl
	I1006 14:21:06.185820  806109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:06.209958  806109 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
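	The version probe above reaches cri-dockerd through the endpoint written to /etc/crictl.yaml a few lines earlier; the same call can be made with the endpoint spelled out explicitly:
	# Query the CRI socket directly instead of relying on /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version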
	I1006 14:21:06.210077  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.232534  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.261297  806109 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1006 14:21:06.261408  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:06.277505  806109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:06.281321  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.291363  806109 kubeadm.go:883] updating cluster {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:06.291470  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:21:06.291533  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.310531  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.310560  806109 docker.go:621] Images already preloaded, skipping extraction
	I1006 14:21:06.310627  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.329469  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.329494  806109 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:21:06.329511  806109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1006 14:21:06.329612  806109 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-006450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
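	The kubelet snippet above is installed as a systemd drop-in (the 10-kubeadm.conf copied a few lines below), so it layers on top of the base kubelet.service. The merged unit that systemd will actually run can be inspected on the node with:
	# Show the effective kubelet unit after all drop-ins are applied
	systemctl cat kubelet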
	I1006 14:21:06.329683  806109 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 14:21:06.383455  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:06.383492  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:06.383512  806109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:06.383538  806109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006450 NodeName:addons-006450 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:06.383695  806109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-006450"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
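The four-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged as /var/tmp/minikube/kubeadm.yaml.new (2214 bytes, below) before being moved into place. A config like this can be sanity-checked by hand with kubeadm's dry-run mode, which parses it and renders the full phase plan without touching the node; a sketch, run on the minikube node itself:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run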
	I1006 14:21:06.383769  806109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:06.391605  806109 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:06.391780  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:06.399572  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1006 14:21:06.412296  806109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:06.425462  806109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1006 14:21:06.438424  806109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:06.442129  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.452170  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.565870  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:06.583339  806109 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450 for IP: 192.168.49.2
	I1006 14:21:06.583363  806109 certs.go:195] generating shared ca certs ...
	I1006 14:21:06.583383  806109 certs.go:227] acquiring lock for ca certs: {Name:mk78547ccc35462965e66385811a001935f7f131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.583518  806109 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key
	I1006 14:21:06.758169  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt ...
	I1006 14:21:06.758199  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt: {Name:mke50bad3f8d3d8c6fc7003f3930a8a3fa326b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758398  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key ...
	I1006 14:21:06.758412  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key: {Name:mk5abe63bfac59b481f1b34a2e6312b79c376290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758508  806109 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key
	I1006 14:21:07.226648  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt ...
	I1006 14:21:07.226681  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt: {Name:mk35f86863953865131b747e65133218cef7ac69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.226896  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key ...
	I1006 14:21:07.226910  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key: {Name:mk32f77223b3be8cca86a275e013030fd8c48071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.227011  806109 certs.go:257] generating profile certs ...
	I1006 14:21:07.227078  806109 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key
	I1006 14:21:07.227095  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt with IP's: []
	I1006 14:21:08.232319  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt ...
	I1006 14:21:08.232348  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: {Name:mk237396132558310e9472dccd1a03e68855c562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232531  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key ...
	I1006 14:21:08.232540  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key: {Name:mkddc2eaac1b60c97f1b0888b122f0d14ff81585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232614  806109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa
	I1006 14:21:08.232629  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:21:08.361861  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa ...
	I1006 14:21:08.361891  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa: {Name:mk44f5f6071204e4219adaa4cbde67bf1f671150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362071  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa ...
	I1006 14:21:08.362085  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa: {Name:mkaddbc6367afe0cdf204382e298fb821349ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362173  806109 certs.go:382] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt
	I1006 14:21:08.362251  806109 certs.go:386] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key
	I1006 14:21:08.362308  806109 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key
	I1006 14:21:08.362337  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt with IP's: []
	I1006 14:21:09.174420  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt ...
	I1006 14:21:09.174451  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt: {Name:mk6a018d5a25b41127abffe602062c5fb3c9da1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174648  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key ...
	I1006 14:21:09.174662  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key: {Name:mk882903eb03fda7b8a7b7a45601eaab350263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174869  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:09.174912  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:09.174936  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:09.174963  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem (1675 bytes)
	I1006 14:21:09.175647  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:09.195248  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:21:09.214696  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:09.234148  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 14:21:09.252534  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:21:09.270877  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:09.289342  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:09.307151  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:09.325295  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:09.343473  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:09.356830  806109 ssh_runner.go:195] Run: openssl version
	I1006 14:21:09.363194  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:09.371688  806109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375519  806109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375603  806109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.421333  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
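The b5213941.0 symlink name is not arbitrary: OpenSSL locates trusted CAs by the subject-name hash of the certificate plus a .0 suffix (the same scheme c_rehash uses), and the `openssl x509 -hash -noout` run just above computes exactly that value. Reproducing it by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching /etc/ssl/certs/b5213941.0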
	I1006 14:21:09.430436  806109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:09.434631  806109 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:21:09.434680  806109 kubeadm.go:400] StartCluster: {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
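This StartCluster struct is the profile's persisted machine config; minikube keeps it on disk as JSON, so the same values can be inspected outside the logs — assuming the standard profile layout used in this run:

	cat /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json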
	I1006 14:21:09.434811  806109 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:21:09.456777  806109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:09.465021  806109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:21:09.473033  806109 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:21:09.473109  806109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:21:09.480866  806109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:21:09.480886  806109 kubeadm.go:157] found existing configuration files:
	
	I1006 14:21:09.480957  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:21:09.488809  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:21:09.488875  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:21:09.496674  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:21:09.504791  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:21:09.504865  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:21:09.512822  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.520596  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:21:09.520672  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.528333  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:21:09.536500  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:21:09.536573  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:21:09.544325  806109 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:21:09.582751  806109 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:21:09.582817  806109 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:21:09.609398  806109 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:21:09.609476  806109 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 14:21:09.609518  806109 kubeadm.go:318] OS: Linux
	I1006 14:21:09.609570  806109 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:21:09.609625  806109 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 14:21:09.609679  806109 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:21:09.609733  806109 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:21:09.609792  806109 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:21:09.609847  806109 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:21:09.609902  806109 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:21:09.609955  806109 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:21:09.610011  806109 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 14:21:09.690823  806109 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:21:09.690944  806109 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:21:09.691059  806109 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:21:09.716052  806109 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:21:09.722414  806109 out.go:252]   - Generating certificates and keys ...
	I1006 14:21:09.722525  806109 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:21:09.722604  806109 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:21:10.515752  806109 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:21:11.397580  806109 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:21:12.455188  806109 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:21:12.900218  806109 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:21:13.333042  806109 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:21:13.333192  806109 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:13.558599  806109 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:21:13.558992  806109 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:14.483025  806109 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:21:15.088755  806109 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:21:15.636700  806109 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:21:15.637033  806109 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:21:16.739302  806109 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:21:17.694897  806109 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:21:18.343756  806109 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:21:18.712603  806109 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:21:19.266809  806109 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:21:19.267485  806109 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:21:19.270758  806109 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:21:19.274504  806109 out.go:252]   - Booting up control plane ...
	I1006 14:21:19.274628  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:21:19.274721  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:21:19.275790  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:21:19.292829  806109 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:21:19.293280  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:21:19.301074  806109 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:21:19.301395  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:21:19.301643  806109 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:21:19.440373  806109 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:21:19.440504  806109 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:21:20.940044  806109 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501293606s
	I1006 14:21:20.940318  806109 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:21:20.940416  806109 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:21:20.940516  806109 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:21:20.940602  806109 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:21:24.828532  806109 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.887425512s
	I1006 14:21:27.037731  806109 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.097440124s
	I1006 14:21:27.942161  806109 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001481359s
	I1006 14:21:27.961418  806109 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 14:21:27.977744  806109 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 14:21:27.992347  806109 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 14:21:27.992563  806109 kubeadm.go:318] [mark-control-plane] Marking the node addons-006450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 14:21:28.013758  806109 kubeadm.go:318] [bootstrap-token] Using token: e1p0fh.afy23ij81unzzcb1
	I1006 14:21:28.016851  806109 out.go:252]   - Configuring RBAC rules ...
	I1006 14:21:28.016992  806109 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 14:21:28.022251  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 14:21:28.031560  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 14:21:28.036500  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 14:21:28.041064  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 14:21:28.048112  806109 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 14:21:28.349107  806109 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 14:21:28.790402  806109 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 14:21:29.351014  806109 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 14:21:29.352283  806109 kubeadm.go:318] 
	I1006 14:21:29.352364  806109 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 14:21:29.352375  806109 kubeadm.go:318] 
	I1006 14:21:29.352461  806109 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 14:21:29.352472  806109 kubeadm.go:318] 
	I1006 14:21:29.352498  806109 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 14:21:29.352567  806109 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 14:21:29.352625  806109 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 14:21:29.352634  806109 kubeadm.go:318] 
	I1006 14:21:29.352691  806109 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 14:21:29.352700  806109 kubeadm.go:318] 
	I1006 14:21:29.352750  806109 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 14:21:29.352759  806109 kubeadm.go:318] 
	I1006 14:21:29.352815  806109 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 14:21:29.352899  806109 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 14:21:29.352974  806109 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 14:21:29.352983  806109 kubeadm.go:318] 
	I1006 14:21:29.353071  806109 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 14:21:29.353153  806109 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 14:21:29.353161  806109 kubeadm.go:318] 
	I1006 14:21:29.353249  806109 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353360  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 \
	I1006 14:21:29.353397  806109 kubeadm.go:318] 	--control-plane 
	I1006 14:21:29.353406  806109 kubeadm.go:318] 
	I1006 14:21:29.353495  806109 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 14:21:29.353503  806109 kubeadm.go:318] 
	I1006 14:21:29.353588  806109 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353698  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 
	I1006 14:21:29.356907  806109 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 14:21:29.357135  806109 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 14:21:29.357260  806109 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:21:29.357283  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:29.357298  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:29.360240  806109 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:21:29.363197  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:21:29.371108  806109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
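The 496-byte conflist itself is not echoed in the log. A minimal bridge conflist of the kind minikube generates — field values below are illustrative, apart from the 10.244.0.0/16 pod CIDR chosen earlier in this run — looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}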
	I1006 14:21:29.386109  806109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:21:29.386176  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:29.386250  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006450 minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-006450 minikube.k8s.io/primary=true
	I1006 14:21:29.530062  806109 ops.go:34] apiserver oom_adj: -16
	I1006 14:21:29.530192  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.031190  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.530267  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.030839  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.530611  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.030258  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.530722  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.030864  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.530331  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.030732  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.138751  806109 kubeadm.go:1113] duration metric: took 4.752637843s to wait for elevateKubeSystemPrivileges
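The burst of `kubectl get sa default` calls at roughly half-second intervals above appears to be a readiness poll: after binding cluster-admin to kube-system:default at 14:21:29.386, minikube retries until the default ServiceAccount exists, which in effect also confirms the controller-manager's serviceaccount controller is up. The equivalent one-off check, exactly as the loop runs it:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig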
	I1006 14:21:34.138779  806109 kubeadm.go:402] duration metric: took 24.704102384s to StartCluster
	I1006 14:21:34.138798  806109 settings.go:142] acquiring lock: {Name:mk86d6d1803b10e0f74b7ca9be175f37419eb162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.138932  806109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:21:34.139342  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/kubeconfig: {Name:mkd0e7dce0fefee9d8326b7f5e1280f715df58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.139547  806109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:21:34.139652  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 14:21:34.139913  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.139945  806109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1006 14:21:34.140026  806109 addons.go:69] Setting yakd=true in profile "addons-006450"
	I1006 14:21:34.140047  806109 addons.go:238] Setting addon yakd=true in "addons-006450"
	I1006 14:21:34.140069  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.140558  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.140784  806109 addons.go:69] Setting inspektor-gadget=true in profile "addons-006450"
	I1006 14:21:34.140802  806109 addons.go:238] Setting addon inspektor-gadget=true in "addons-006450"
	I1006 14:21:34.140825  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.141217  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.141581  806109 addons.go:69] Setting metrics-server=true in profile "addons-006450"
	I1006 14:21:34.141646  806109 addons.go:238] Setting addon metrics-server=true in "addons-006450"
	I1006 14:21:34.141685  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.142139  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.143205  806109 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.143238  806109 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-006450"
	I1006 14:21:34.143270  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.143806  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.144933  806109 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.144962  806109 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-006450"
	I1006 14:21:34.144997  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.145499  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.146720  806109 addons.go:69] Setting cloud-spanner=true in profile "addons-006450"
	I1006 14:21:34.146748  806109 addons.go:238] Setting addon cloud-spanner=true in "addons-006450"
	I1006 14:21:34.146777  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.147335  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.156945  806109 addons.go:69] Setting registry=true in profile "addons-006450"
	I1006 14:21:34.157043  806109 addons.go:238] Setting addon registry=true in "addons-006450"
	I1006 14:21:34.157131  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.157718  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.176071  806109 addons.go:69] Setting registry-creds=true in profile "addons-006450"
	I1006 14:21:34.176145  806109 addons.go:238] Setting addon registry-creds=true in "addons-006450"
	I1006 14:21:34.176197  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.176774  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.185281  806109 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006450"
	I1006 14:21:34.185740  806109 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:34.185846  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.187060  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.193152  806109 addons.go:69] Setting storage-provisioner=true in profile "addons-006450"
	I1006 14:21:34.193188  806109 addons.go:238] Setting addon storage-provisioner=true in "addons-006450"
	I1006 14:21:34.193224  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.193707  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.207765  806109 addons.go:69] Setting default-storageclass=true in profile "addons-006450"
	I1006 14:21:34.207813  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006450"
	I1006 14:21:34.208233  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.208517  806109 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006450"
	I1006 14:21:34.208563  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006450"
	I1006 14:21:34.208903  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.218653  806109 addons.go:69] Setting volcano=true in profile "addons-006450"
	I1006 14:21:34.219019  806109 addons.go:238] Setting addon volcano=true in "addons-006450"
	I1006 14:21:34.219129  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.219730  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.219851  806109 addons.go:69] Setting gcp-auth=true in profile "addons-006450"
	I1006 14:21:34.219900  806109 mustload.go:65] Loading cluster: addons-006450
	I1006 14:21:34.220156  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.220463  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.244567  806109 addons.go:69] Setting volumesnapshots=true in profile "addons-006450"
	I1006 14:21:34.244607  806109 addons.go:238] Setting addon volumesnapshots=true in "addons-006450"
	I1006 14:21:34.244648  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.245166  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.256667  806109 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:34.256935  806109 addons.go:69] Setting ingress=true in profile "addons-006450"
	I1006 14:21:34.256960  806109 addons.go:238] Setting addon ingress=true in "addons-006450"
	I1006 14:21:34.257001  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.257557  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.285413  806109 addons.go:69] Setting ingress-dns=true in profile "addons-006450"
	I1006 14:21:34.285459  806109 addons.go:238] Setting addon ingress-dns=true in "addons-006450"
	I1006 14:21:34.285510  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.286061  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.332782  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 14:21:34.338069  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 14:21:34.338156  806109 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 14:21:34.338257  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.357721  806109 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 14:21:34.362166  806109 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:34.362235  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 14:21:34.362331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.380568  806109 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 14:21:34.383806  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 14:21:34.383934  806109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 14:21:34.384103  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.384670  806109 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 14:21:34.393975  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 14:21:34.394079  806109 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 14:21:34.394248  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.420035  806109 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 14:21:34.423442  806109 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:34.423541  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 14:21:34.423642  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.431543  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:34.457975  806109 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 14:21:34.497876  806109 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 14:21:34.498037  806109 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 14:21:34.510678  806109 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 14:21:34.519256  806109 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:34.519362  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 14:21:34.519521  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.526420  806109 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:34.526447  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 14:21:34.526546  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.528693  806109 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 14:21:34.528724  806109 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 14:21:34.528812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.532917  806109 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 14:21:34.536266  806109 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 14:21:34.537209  806109 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 14:21:34.537230  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 14:21:34.537331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.542063  806109 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-006450"
	I1006 14:21:34.542107  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.542545  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.581749  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 14:21:34.585130  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.588025  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.590892  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:34.590917  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 14:21:34.591008  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.605945  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:34.605973  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 14:21:34.606041  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.626809  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.628682  806109 addons.go:238] Setting addon default-storageclass=true in "addons-006450"
	I1006 14:21:34.628721  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.629125  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.636774  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 14:21:34.640152  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.649003  806109 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1006 14:21:34.649626  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.656019  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 14:21:34.658838  806109 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1006 14:21:34.664662  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.676340  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 14:21:34.676611  806109 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1006 14:21:34.703838  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.723458  806109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:34.726631  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:34.726657  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:34.726743  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.752688  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 14:21:34.756756  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 14:21:34.760053  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 14:21:34.763938  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 14:21:34.769389  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 14:21:34.772287  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 14:21:34.772317  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 14:21:34.772394  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.772747  806109 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:34.772787  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1006 14:21:34.772862  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.804304  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.808420  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.822462  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.823147  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.867044  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.870362  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.874341  806109 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 14:21:34.876981  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.878063  806109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:34.878079  806109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:34.878140  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.888089  806109 out.go:179]   - Using image docker.io/busybox:stable
	I1006 14:21:34.891239  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:34.891265  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 14:21:34.891331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.920306  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.945324  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
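The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts stanza before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, so pods can resolve host.minikube.internal to the host gateway. The injected fragment, reconstructed from the sed expressions:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}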
	I1006 14:21:34.947994  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:34.970150  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.970251  806109 retry.go:31] will retry after 147.40402ms: ssh: handshake failed: EOF
	W1006 14:21:34.972537  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.972566  806109 retry.go:31] will retry after 281.687683ms: ssh: handshake failed: EOF
	I1006 14:21:34.975793  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.005444  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:35.009771  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.009812  806109 retry.go:31] will retry after 207.774831ms: ssh: handshake failed: EOF
	I1006 14:21:35.012483  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.127149  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1006 14:21:35.219409  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.219491  806109 retry.go:31] will retry after 414.252414ms: ssh: handshake failed: EOF
	W1006 14:21:35.255517  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.255595  806109 retry.go:31] will retry after 378.429324ms: ssh: handshake failed: EOF
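The handshake EOFs above are transient: sshd inside the freshly created container is not yet accepting connections, so each dial fails fast and is retried after a short, jittered delay (147ms, 281ms, 207ms, ...). Below is a minimal sketch of that retry-with-backoff pattern — an illustration of what the retry.go:31 lines record, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs op up to attempts times, sleeping a jittered delay between
	// failures, and returns the last error if every attempt fails.
	func retry(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Jitter the delay so concurrent dialers do not retry in lockstep.
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		dials := 0
		err := retry(5, 150*time.Millisecond, func() error {
			dials++
			if dials < 3 {
				return errors.New("ssh: handshake failed: EOF") // transient: sshd not ready yet
			}
			return nil
		})
		fmt.Println("final result:", err)
	}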
	I1006 14:21:35.851743  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:35.853206  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:35.989160  806109 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:35.989181  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 14:21:36.111352  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:36.151070  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 14:21:36.151165  806109 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 14:21:36.192781  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 14:21:36.192855  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 14:21:36.226627  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 14:21:36.226690  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 14:21:36.243375  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:36.255630  806109 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 14:21:36.255746  806109 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 14:21:36.350477  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 14:21:36.350562  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 14:21:36.377661  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:36.396057  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:36.399305  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:36.426714  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 14:21:36.426796  806109 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 14:21:36.427640  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:36.435627  806109 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.435647  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 14:21:36.443471  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:36.479083  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:36.481831  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 14:21:36.481904  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 14:21:36.527849  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 14:21:36.527927  806109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 14:21:36.537515  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 14:21:36.537591  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 14:21:36.597935  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 14:21:36.598000  806109 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 14:21:36.601149  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.790553  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:36.790647  806109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 14:21:36.821053  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 14:21:36.821135  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 14:21:36.867220  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:36.871426  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 14:21:36.871504  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 14:21:36.880338  806109 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.753102328s)
	I1006 14:21:36.880515  806109 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.935150087s)
	I1006 14:21:36.880679  806109 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
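The sed pipeline that completed above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the existing "forward . /etc/resolv.conf" line (and a log directive ahead of errors), so that host.minikube.internal resolves to the host-side gateway while every other name falls through to the normal resolvers. Reconstructed from the sed expression itself, the inserted Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}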
	I1006 14:21:36.881380  806109 node_ready.go:35] waiting up to 6m0s for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887470  806109 node_ready.go:49] node "addons-006450" is "Ready"
	I1006 14:21:36.887509  806109 node_ready.go:38] duration metric: took 6.110221ms for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887526  806109 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:21:36.887614  806109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:21:36.891551  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:37.041224  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.041263  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 14:21:37.185540  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 14:21:37.185582  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 14:21:37.245756  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 14:21:37.245794  806109 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 14:21:37.320678  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.384934  806109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006450" context rescaled to 1 replicas
	I1006 14:21:37.439254  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 14:21:37.439280  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 14:21:37.491833  806109 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:37.491853  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 14:21:37.710140  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.858315722s)
	I1006 14:21:37.710258  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.856978431s)
	I1006 14:21:37.797019  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 14:21:37.797087  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 14:21:38.055462  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.944020191s)
	I1006 14:21:38.066071  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:38.209415  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 14:21:38.209495  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 14:21:38.308015  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 14:21:38.308047  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 14:21:38.731766  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 14:21:38.731811  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 14:21:38.884673  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:38.884702  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 14:21:39.201324  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:42.056707  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 14:21:42.056850  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:42.096992  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:43.527695  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.284260443s)
	I1006 14:21:43.527736  806109 addons.go:479] Verifying addon ingress=true in "addons-006450"
	I1006 14:21:43.527908  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.150170305s)
	I1006 14:21:43.528008  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.131874449s)
	W1006 14:21:43.528029  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:43.528050  806109 retry.go:31] will retry after 227.873764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
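This validation failure is client-side: kubectl requires every YAML document in an applied file to declare top-level apiVersion and kind, and the error suggests ig-crd.yaml contains a document missing both (the other manifests in the same invocation were created fine). A minimal sketch of that per-document check in Go, run against a hypothetical local copy of the file using gopkg.in/yaml.v3 — it illustrates the rule kubectl enforces, not kubectl's actual validator:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the manifest
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // iterates over ----separated documents
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// kubectl's client-side validation requires both fields on every document.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}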
	I1006 14:21:43.528137  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.128758076s)
	I1006 14:21:43.528185  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.100376481s)
	I1006 14:21:43.528469  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.084972148s)
	I1006 14:21:43.528566  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.04940419s)
	I1006 14:21:43.528706  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.927477657s)
	I1006 14:21:43.528726  806109 addons.go:479] Verifying addon registry=true in "addons-006450"
	I1006 14:21:43.532546  806109 out.go:179] * Verifying ingress addon...
	I1006 14:21:43.534069  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 14:21:43.534935  806109 out.go:179] * Verifying registry addon...
	I1006 14:21:43.537759  806109 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 14:21:43.540886  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 14:21:43.565742  806109 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 14:21:43.565781  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:43.568676  806109 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 14:21:43.568708  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1006 14:21:43.576208  806109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
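The default-storageclass error above is an optimistic-concurrency conflict: the local-path StorageClass was modified between minikube's read and its write, so the update carried a stale resourceVersion and was rejected. The standard remedy is to re-read and re-apply the mutation until it lands, which client-go packages as retry.RetryOnConflict. A minimal sketch assuming a pre-built clientset — an illustration of the pattern, not minikube's code:

	package scutil

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default annotation on a StorageClass, retrying
	// whenever the write loses an optimistic-concurrency race.
	func markNonDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries a fresh resourceVersion.
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict error here triggers another attempt
		})
	}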
	I1006 14:21:43.749034  806109 addons.go:238] Setting addon gcp-auth=true in "addons-006450"
	I1006 14:21:43.749121  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:43.749685  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:43.756132  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:43.787457  806109 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 14:21:43.787548  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:43.815805  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:44.114671  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:44.115253  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.548438  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.550543  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.046803  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.049237  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581293  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.153351  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:46.153798  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.640887  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.643861  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081245  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:47.568674  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.569175  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.056720  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.057131  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.585162  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.717857623s)
	I1006 14:21:48.585271  806109 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.697643759s)
	I1006 14:21:48.585318  806109 api_server.go:72] duration metric: took 14.445740723s to wait for apiserver process to appear ...
	I1006 14:21:48.585343  806109 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:21:48.585375  806109 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 14:21:48.585803  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.694205832s)
	I1006 14:21:48.585856  806109 addons.go:479] Verifying addon metrics-server=true in "addons-006450"
	I1006 14:21:48.585929  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.265223311s)
	I1006 14:21:48.586329  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.520142743s)
	W1006 14:21:48.586371  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 14:21:48.586391  806109 retry.go:31] will retry after 354.82385ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
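The retry here exists because a single kubectl apply over several files is not ordered with respect to CRD readiness: the VolumeSnapshot CRDs are created in the same invocation, but the API server is not yet serving the new snapshot.storage.k8s.io/v1 types when kubectl tries to create the VolumeSnapshotClass, hence "ensure CRDs are installed first". Outside of a blanket retry, the usual fix is to apply the CRDs alone, wait for them to report Established, and only then apply the custom resources. A minimal sketch shelling out to kubectl, with file names taken from the log — the sequencing is the point, not the exact commands minikube runs:

	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		// 1. Create the CRDs on their own.
		run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		// 2. Block until the API server actually serves the new type.
		run("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
		// 3. Only now create instances of the custom resource.
		run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}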
	I1006 14:21:48.586570  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.385202699s)
	I1006 14:21:48.586585  806109 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:48.590422  806109 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006450 service yakd-dashboard -n yakd-dashboard
	
	I1006 14:21:48.592576  806109 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 14:21:48.597670  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 14:21:48.614206  806109 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 14:21:48.647358  806109 api_server.go:141] control plane version: v1.34.1
	I1006 14:21:48.647389  806109 api_server.go:131] duration metric: took 62.022744ms to wait for apiserver health ...
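The healthz wait is a plain HTTPS GET against the apiserver endpoint; a 200 response with body "ok" is exactly what the lines above record. A minimal standalone probe, with the endpoint taken from the log — skipping TLS verification is for illustration only, a real client should trust the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: verify against the cluster CA in real use.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}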
	I1006 14:21:48.647399  806109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:21:48.648507  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.648899  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.690542  806109 system_pods.go:59] 19 kube-system pods found
	I1006 14:21:48.690881  806109 system_pods.go:61] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.690920  806109 system_pods.go:61] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.690960  806109 system_pods.go:61] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.690990  806109 system_pods.go:61] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.691016  806109 system_pods.go:61] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.691053  806109 system_pods.go:61] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.691073  806109 system_pods.go:61] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.691092  806109 system_pods.go:61] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.691138  806109 system_pods.go:61] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.691163  806109 system_pods.go:61] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.691184  806109 system_pods.go:61] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.691218  806109 system_pods.go:61] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.691244  806109 system_pods.go:61] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.691266  806109 system_pods.go:61] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.691302  806109 system_pods.go:61] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.691330  806109 system_pods.go:61] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.691354  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691391  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691417  806109 system_pods.go:61] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.691437  806109 system_pods.go:74] duration metric: took 44.032107ms to wait for pod list to return data ...
	I1006 14:21:48.691473  806109 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:21:48.690844  806109 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 14:21:48.691711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:48.780129  806109 default_sa.go:45] found service account: "default"
	I1006 14:21:48.780207  806109 default_sa.go:55] duration metric: took 88.709889ms for default service account to be created ...
	I1006 14:21:48.780231  806109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:21:48.888790  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.132593822s)
	W1006 14:21:48.888876  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:48.888908  806109 retry.go:31] will retry after 467.080472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:48.888970  806109 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.101487907s)
	I1006 14:21:48.892596  806109 system_pods.go:86] 19 kube-system pods found
	I1006 14:21:48.892682  806109 system_pods.go:89] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.892707  806109 system_pods.go:89] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.892729  806109 system_pods.go:89] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.892769  806109 system_pods.go:89] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.892792  806109 system_pods.go:89] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.892812  806109 system_pods.go:89] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.892844  806109 system_pods.go:89] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.892868  806109 system_pods.go:89] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.892892  806109 system_pods.go:89] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.892925  806109 system_pods.go:89] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.892962  806109 system_pods.go:89] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.892984  806109 system_pods.go:89] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.893021  806109 system_pods.go:89] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.893045  806109 system_pods.go:89] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.893080  806109 system_pods.go:89] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.893105  806109 system_pods.go:89] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.893126  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893161  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893183  806109 system_pods.go:89] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.893204  806109 system_pods.go:126] duration metric: took 112.954104ms to wait for k8s-apps to be running ...
	I1006 14:21:48.893238  806109 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:21:48.893331  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:21:48.893436  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:48.897290  806109 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 14:21:48.900672  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 14:21:48.900752  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 14:21:48.942085  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:48.960118  806109 system_svc.go:56] duration metric: took 66.871905ms WaitForService to wait for kubelet
	I1006 14:21:48.960199  806109 kubeadm.go:586] duration metric: took 14.820620987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:21:48.960231  806109 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:21:48.965554  806109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:21:48.965640  806109 node_conditions.go:123] node cpu capacity is 2
	I1006 14:21:48.965667  806109 node_conditions.go:105] duration metric: took 5.41607ms to run NodePressure ...
	I1006 14:21:48.965693  806109 start.go:241] waiting for startup goroutines ...
	I1006 14:21:48.984429  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 14:21:48.984493  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 14:21:49.062891  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.063409  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.102274  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:49.109468  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.109495  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 14:21:49.163209  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.357126  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:49.543241  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.545480  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.602876  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.041860  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.044347  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.102201  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.541424  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.543788  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.625651  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.006456  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.064277984s)
	I1006 14:21:51.006543  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.84331281s)
	I1006 14:21:51.010142  806109 addons.go:479] Verifying addon gcp-auth=true in "addons-006450"
	I1006 14:21:51.025044  806109 out.go:179] * Verifying gcp-auth addon...
	I1006 14:21:51.032841  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 14:21:51.036529  806109 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 14:21:51.036555  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.042265  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.044526  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.102619  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.536647  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.544904  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.545440  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.602200  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.864284  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.507114739s)
	W1006 14:21:51.864377  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.864433  806109 retry.go:31] will retry after 615.286821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.037094  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.041054  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.043625  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.101572  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:52.479941  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:52.536478  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.541425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.543774  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.600990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.035872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.041098  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.043636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.101845  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.536239  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.536598  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.05658149s)
	W1006 14:21:53.536657  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.536695  806109 retry.go:31] will retry after 1.187113289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
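
The retry loop above keeps failing for the same reason every time: kubectl refuses ig-crd.yaml because at least one YAML document in it is missing the mandatory apiVersion and kind fields, while every other object in the batch (the gadget namespace, the RBAC objects, and the daemonset) applies cleanly. A minimal sketch of the same per-document check, assuming a plain multi-document manifest; this is an illustration, not kubectl's actual validator:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// checkDoc reports which of the mandatory top-level fields are absent
// from a single YAML document.
func checkDoc(doc string) []string {
	var missing []string
	for _, field := range []string{"apiVersion", "kind"} {
		re := regexp.MustCompile(`(?m)^` + field + `\s*:`)
		if !re.MatchString(doc) {
			missing = append(missing, field+" not set")
		}
	}
	return missing
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: validate <manifest.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// kubectl validates every "---"-separated document independently.
	exit := 0
	for i, doc := range regexp.MustCompile(`(?m)^---\s*$`).Split(string(data), -1) {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		if missing := checkDoc(doc); len(missing) > 0 {
			// mirrors: error validating data: [apiVersion not set, kind not set]
			fmt.Printf("error validating document %d: [%s]\n", i, strings.Join(missing, ", "))
			exit = 1
		}
	}
	os.Exit(exit)
}

Because the error names ig-crd.yaml specifically while the companion ig-deployment.yaml objects all apply, regenerating that CRD manifest (or, as the message itself suggests, passing --validate=false) is what would break the retry cycle; retrying the identical file cannot.
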
	I1006 14:21:53.541601  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.543552  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.602095  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.037487  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.042200  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.045343  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.102498  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.537542  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.542167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.544351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.602290  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.724667  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:55.036372  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.043120  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.044769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.101792  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.536221  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.541111  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.543457  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.601561  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.840769  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116063398s)
	W1006 14:21:55.840813  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:21:55.840833  806109 retry.go:31] will retry after 947.610718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:21:56.036387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.043063  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.044685  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.101635  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.536456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.541501  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.543585  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.601983  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.789245  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:57.036659  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.042057  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.044676  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.102243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.537164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.543103  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.544004  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.601850  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.839191  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.049904578s)
	W1006 14:21:57.839238  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:21:57.839258  806109 retry.go:31] will retry after 1.03292313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:21:58.037616  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.041961  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.044496  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.107912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.536745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.540665  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.544634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.601133  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.872574  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:59.036224  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.041408  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.044098  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.101370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.536626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.542541  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.543654  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.601836  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.922791  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.050177986s)
	W1006 14:21:59.922823  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:21:59.922842  806109 retry.go:31] will retry after 2.488598562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:00.043764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.064604  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.065064  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.129394  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:00.537107  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.541010  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.543818  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.628309  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.036861  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.043610  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.046494  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.102249  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.537399  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.541534  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.543844  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.601153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.038594  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.041768  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.044895  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.102517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.411855  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:02.535770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.540865  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.544524  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.601881  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.036514  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.041497  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.043732  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.101053  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.551361  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.551723  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.552096  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.607741  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.821574  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.409680153s)
	W1006 14:22:03.821607  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:03.821626  806109 retry.go:31] will retry after 2.808613429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:04.036608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.042059  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.044591  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.102238  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:04.537121  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.541031  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.544043  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.638355  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.045826  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.045915  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.046027  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.103126  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.536935  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.541096  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.543811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.601370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.037342  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.048770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.049575  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.102090  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.537158  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.541167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.544718  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.601939  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.631301  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:07.036903  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.041275  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.046171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.101990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:07.537306  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.542954  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.548030  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.602151  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.038923  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.045713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.048165  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.138614  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.453750  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.822414187s)
	W1006 14:22:08.453835  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:08.453869  806109 retry.go:31] will retry after 8.425837281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
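
The intervals chosen by retry.go grow from about a second to 28.5s across these attempts: an exponential backoff with random jitter, which is why 947ms can follow 1.19s. A minimal sketch of that pattern, assuming a capped, jittered doubling schedule; it is an illustration, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a jittered, doubling delay, capped at
// maxDelay, until fn succeeds or the attempt budget is exhausted.
func retryWithBackoff(attempts int, base, maxDelay time.Duration, fn func() error) error {
	delay := base
	for i := 1; ; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		if i == attempts {
			return fmt.Errorf("giving up after %d attempts: %w", i, err)
		}
		// jitter in [0.5, 1.5) of the nominal delay produces the uneven
		// spacing seen in the log lines above
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	err := retryWithBackoff(6, time.Second, 30*time.Second, func() error {
		return errors.New("Process exited with status 1")
	})
	fmt.Println(err)
}

The cap matters here: with the manifest permanently invalid, backoff only spaces out identical failures rather than fixing anything, so the loop burns its whole budget on the same error.
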
	I1006 14:22:08.536134  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.541309  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.543203  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.601173  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.037059  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.041277  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.043958  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.106411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.536191  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.540957  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.543212  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.637335  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.038746  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.041203  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.043968  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.101414  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.535919  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.541593  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.544180  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.601144  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.036181  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.041258  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.043931  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.102062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.536161  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.541576  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.545106  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.601994  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.037286  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.041743  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.043857  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.101936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.536252  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.542977  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.544737  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.602418  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.037636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.043353  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.045541  806109 kapi.go:107] duration metric: took 29.504656348s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 14:22:13.103856  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.536010  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.541542  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.602453  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.041118  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.101847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.535955  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.540895  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.601210  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.038047  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.042436  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.101780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.536551  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.541754  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.601384  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.036266  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.041349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.101883  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.535728  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.540993  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.601091  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.880118  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:17.036213  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.041368  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.102032  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:17.536149  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.541821  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.606226  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.037103  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.041146  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.102447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.125066  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.244891148s)
	W1006 14:22:18.125106  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:18.125137  806109 retry.go:31] will retry after 8.394227584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:18.536459  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.541489  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.602140  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.036341  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.041843  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.101573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.536129  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.541594  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.036705  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.040761  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.101466  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.536346  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.541417  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.602109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.037009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.042008  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.103192  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.536872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.545192  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.036447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.041450  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.101387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.537530  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.547087  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.602381  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.038711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.047024  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.102246  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.537465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.542053  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.602575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.037716  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.041932  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.105425  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.537009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.540996  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.601164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.037218  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.041462  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.101898  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.541274  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.541617  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.601533  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.037202  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.041027  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.101243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.520530  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:26.537318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.541434  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.602288  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.040735  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.101318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.536660  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.540656  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.601312  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.622677  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.102107139s)
	W1006 14:22:27.622764  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:27.622799  806109 retry.go:31] will retry after 8.964562377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1006 14:22:28.036352  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.041655  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.101317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:28.536873  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.542495  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.601848  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.037235  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.041321  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.101529  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.536608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.541988  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.601332  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.067966  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.069628  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.102287  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.537456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.541607  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.605527  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.047144  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.047366  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.102811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.540586  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.543600  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.601318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.041560  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.101712  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.537074  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.541459  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.637575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.037645  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.041762  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.101769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.537080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.546252  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.602460  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.049083  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.059194  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.102644  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.536345  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.541231  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.602566  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.036474  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.041683  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.101153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.536516  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.543131  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.601301  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.040029  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.041789  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.101554  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.536713  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.541523  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.587821  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:36.637573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.036522  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.042208  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.101356  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.538450  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.541912  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.601423  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.039073  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.041963  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.107975  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.260560  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.672700487s)
	W1006 14:22:38.260650  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:38.260684  806109 retry.go:31] will retry after 28.502029632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:38.537841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.541302  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.634080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.042819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.044710  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.101819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.536317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.541291  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.602171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.063837  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.065152  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.160263  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.536517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.541760  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.601589  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.035811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.040992  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.101764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.537386  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.541696  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.638626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.041509  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.042425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.102420  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.536866  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.540382  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.602008  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.036485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.041855  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.104569  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.537538  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.541564  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.603912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.036751  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.041644  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.100816  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.535598  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.540901  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.605465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.067085  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.085831  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.104001  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.535733  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.541994  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.601937  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.037039  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.042662  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.100769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.538350  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.542984  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.601745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.036231  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.041572  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.101597  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.537411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.541447  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.601925  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.036062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.046387  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.106511  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.535973  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.541411  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.602406  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.082967  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.083089  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.101404  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.543349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.543936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.606022  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:50.052841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.053282  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:50.101918  806109 kapi.go:107] duration metric: took 1m1.504246684s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 14:22:50.536780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.540713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.039833  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.041873  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.536470  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.541280  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.036677  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.041641  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.536085  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.540908  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.036694  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.041925  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.536756  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.541339  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.036706  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.041617  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.536485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.541468  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:55.054778  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:55.076569  806109 kapi.go:107] duration metric: took 1m11.538807076s to wait for app.kubernetes.io/name=ingress-nginx ...
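Both addon waits completed so far (csi-hostpath-driver and ingress-nginx) key off the label selectors shown in the kapi.go lines above; the same pods can be inspected directly from the shell if a wait looks stuck (a sketch, not a command the test harness runs):

  # Sketch: list the addon pods behind each kapi.go label selector.
  kubectl --context addons-006450 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
  kubectl --context addons-006450 get pods -A -l app.kubernetes.io/name=ingress-nginx
  kubectl --context addons-006450 get pods -A -l kubernetes.io/minikube-addons=gcp-auth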
	I1006 14:22:55.536329  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.036624  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.535976  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.036354  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.536109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.037892  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.536442  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.536233  806109 kapi.go:107] duration metric: took 1m8.503389262s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 14:22:59.539324  806109 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-006450 cluster.
	I1006 14:22:59.542088  806109 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 14:22:59.544863  806109 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
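The three gcp-auth notes above describe the opt-out mechanism: credential mounting is skipped for pods carrying the gcp-auth-skip-secret label. A minimal sketch (only the label key comes from the message above; the pod name and "true" value are illustrative):

  # Sketch: a pod that opts out of gcp-auth credential injection.
  cat <<'EOF' | kubectl --context addons-006450 apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: no-gcp-creds
    labels:
      gcp-auth-skip-secret: "true"
  spec:
    containers:
    - name: app
      image: busybox:stable
      command: ["sleep", "3600"]
  EOF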
	I1006 14:23:06.763823  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:07.625986  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:07.626019  806109 retry.go:31] will retry after 17.722294339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:25.349291  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:26.187865  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:26.187971  806109 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
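Every retry of the inspektor-gadget apply fails identically: kubectl's client-side validation rejects ig-crd.yaml because the manifest is missing the required apiVersion and kind fields, so the CRD never lands even though the other gadget resources report "unchanged". Any top-level Kubernetes object must carry both fields; a hypothetical well-formed CRD manifest would start like the sketch below (names are illustrative, not the actual ig-crd.yaml contents). The --validate=false escape hatch kubectl suggests would only mask the symptom rather than fix the manifest.

  # Hypothetical sketch of a syntactically valid CRD manifest; the missing
  # apiVersion/kind lines are exactly what the validator is complaining about.
  cat <<'EOF' | kubectl apply -f -
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: traces.gadget.kinvolk.io
  spec:
    group: gadget.kinvolk.io
    names:
      kind: Trace
      plural: traces
    scope: Namespaced
    versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF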
	I1006 14:23:26.191145  806109 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1006 14:23:26.193747  806109 addons.go:514] duration metric: took 1m52.052915825s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher volcano metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1006 14:23:26.193810  806109 start.go:246] waiting for cluster config update ...
	I1006 14:23:26.193839  806109 start.go:255] writing updated cluster config ...
	I1006 14:23:26.194174  806109 ssh_runner.go:195] Run: rm -f paused
	I1006 14:23:26.198700  806109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:26.203281  806109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.213859  806109 pod_ready.go:94] pod "coredns-66bc5c9577-5b26c" is "Ready"
	I1006 14:23:26.213893  806109 pod_ready.go:86] duration metric: took 10.577014ms for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.216571  806109 pod_ready.go:83] waiting for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.223509  806109 pod_ready.go:94] pod "etcd-addons-006450" is "Ready"
	I1006 14:23:26.223539  806109 pod_ready.go:86] duration metric: took 6.938313ms for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.226276  806109 pod_ready.go:83] waiting for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.230877  806109 pod_ready.go:94] pod "kube-apiserver-addons-006450" is "Ready"
	I1006 14:23:26.230912  806109 pod_ready.go:86] duration metric: took 4.607653ms for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.233246  806109 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.603009  806109 pod_ready.go:94] pod "kube-controller-manager-addons-006450" is "Ready"
	I1006 14:23:26.603041  806109 pod_ready.go:86] duration metric: took 369.767385ms for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.803580  806109 pod_ready.go:83] waiting for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.202844  806109 pod_ready.go:94] pod "kube-proxy-rr8rw" is "Ready"
	I1006 14:23:27.202872  806109 pod_ready.go:86] duration metric: took 399.265658ms for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.402987  806109 pod_ready.go:83] waiting for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803050  806109 pod_ready.go:94] pod "kube-scheduler-addons-006450" is "Ready"
	I1006 14:23:27.803077  806109 pod_ready.go:86] duration metric: took 400.059334ms for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803090  806109 pod_ready.go:40] duration metric: took 1.604355795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:27.868687  806109 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 14:23:27.871326  806109 out.go:179] * Done! kubectl is now configured to use "addons-006450" cluster and "default" namespace by default
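The "extra waiting" block above polls each core kube-system component by label until its pod reports Ready. Roughly the same check can be reproduced with kubectl wait (a shell sketch mirroring the selectors and 4m budget from the log; this is not what pod_ready.go literally executes):

  # Sketch: approximate pod_ready.go's extra wait from the shell.
  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
    kubectl --context addons-006450 -n kube-system wait \
      --for=condition=Ready pod -l "$sel" --timeout=240s
  done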
	
	
	==> Docker <==
	Oct 06 14:39:13 addons-006450 dockerd[1123]: time="2025-10-06T14:39:13.106784424Z" level=info msg="ignoring event" container=2e1fd961dc8a747055f5ce2fbde8e3ec630f8274d925859af7db38de4ff0df90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:13 addons-006450 dockerd[1123]: time="2025-10-06T14:39:13.109138619Z" level=info msg="ignoring event" container=05cbc48ffb51ee4a7395284ee2704c32b83756093ae36a56c945b2a5d5a4eb0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:13 addons-006450 dockerd[1123]: time="2025-10-06T14:39:13.400711348Z" level=info msg="ignoring event" container=47610a948360b5cdc052f77a137292da2a869e2a2849d63c72de76326a70e4a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:13 addons-006450 dockerd[1123]: time="2025-10-06T14:39:13.416629968Z" level=info msg="ignoring event" container=4dcf8198ace6514ac2e22c85569cdaee39ed3dff3a4144796ed8a4ea3ff20599 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.194320874Z" level=info msg="ignoring event" container=9e27aa581454c13f46d59585efaa29660607956d14403c6e6129b41b839344f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.315946568Z" level=info msg="ignoring event" container=4876f3a9c229acf321f36d002a4175bef0de0ff9ef884693dc5a66259b105c84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.339838295Z" level=info msg="ignoring event" container=0201aae6c64e0b0fe80f298a253bb5edd7f53a1ca441a612913e54e79c5e5e3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.347190565Z" level=info msg="ignoring event" container=6f25a4d6caf644628eef2f0a3b8c514ee34061322f53701cd3a9db985f5d0784 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.369062762Z" level=info msg="ignoring event" container=7f7bdac7cf59b95abdf02c9a524e31256018a00727c139d2c4fa663c24ec1e7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.369126186Z" level=info msg="ignoring event" container=9c0d6f72f1f92060af86f30f5d10f614300100e7de90c351cf295e2a77a86d83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.380911594Z" level=info msg="ignoring event" container=4eeec494d7d9f2bd711089a92e1dd86762e13291f627e4dd276370765dceb6c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.387945839Z" level=info msg="ignoring event" container=e400809cac56914639f8f386c32220365350a8cb9316fc9a9e4c6acc116fd557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.565120936Z" level=info msg="ignoring event" container=1a820fa8b56fd25884bf8d6ef8772b4a09f97928f06cbb7eaa9a08e060adde0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.688021551Z" level=info msg="ignoring event" container=0ca9bd27ecd5a5e4d62d325a7b439d00caf925d98ee6f80fb7981f9c75a83d05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:14 addons-006450 dockerd[1123]: time="2025-10-06T14:39:14.816896632Z" level=info msg="ignoring event" container=70205a118c52faea85513013fdbfa4fc84856b9806d634e120e523ae7e913ae4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:39:21 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:39:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc69e520bf4c4bf47b79f08b23dbd9af85387fe1d8d8e968dd1fab91309ae954/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:39:21 addons-006450 dockerd[1123]: time="2025-10-06T14:39:21.792598195Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:39:21 addons-006450 dockerd[1123]: time="2025-10-06T14:39:21.875507648Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:39:33 addons-006450 dockerd[1123]: time="2025-10-06T14:39:33.808056635Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:39:33 addons-006450 dockerd[1123]: time="2025-10-06T14:39:33.980637052Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:39:33 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:39:33Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Oct 06 14:39:55 addons-006450 dockerd[1123]: time="2025-10-06T14:39:55.801921737Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:39:55 addons-006450 dockerd[1123]: time="2025-10-06T14:39:55.885450499Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:40:43 addons-006450 dockerd[1123]: time="2025-10-06T14:40:43.801318944Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:40:43 addons-006450 dockerd[1123]: time="2025-10-06T14:40:43.882844106Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
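The repeated "toomanyrequests" errors above are Docker Hub's unauthenticated pull rate limit, which is why the busybox image for the test pod never arrives. A common mitigation is to pull with credentials via an imagePullSecrets reference (a sketch with placeholder credentials; the regcred secret and pod name are illustrative):

  # Sketch: authenticated Docker Hub pulls via an image pull secret.
  kubectl create secret docker-registry regcred \
    --docker-username=<user> --docker-password=<token>
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-authenticated
  spec:
    imagePullSecrets:
    - name: regcred
    containers:
    - name: busybox
      image: busybox:stable
      command: ["sleep", "3600"]
  EOF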
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD                                         NAMESPACE
	ffe6a9017df48       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                    0                   311174277f416       busybox                                     default
	f2a47081481dc       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             18 minutes ago      Running             controller                 0                   bd5557adaf3c6       ingress-nginx-controller-675c5ddd98-k4m4k   ingress-nginx
	02001c5bf8ca9       9a80c0c8eb61c                                                                                                                18 minutes ago      Exited              patch                      1                   67b15011fa29d       ingress-nginx-admission-patch-s6s8k         ingress-nginx
	11587ae8b0259       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   18 minutes ago      Exited              create                     0                   6d0aa0c7acb77       ingress-nginx-admission-create-t2tnf        ingress-nginx
	509e7623ba228       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            18 minutes ago      Running             gadget                     0                   14032f9fa6ab7       gadget-mwfpm                                gadget
	d3025a0e45236       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        18 minutes ago      Running             yakd                       0                   510624cc4af1e       yakd-dashboard-5ff678cb9-nfj9q              yakd-dashboard
	0244185030bd7       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         18 minutes ago      Running             minikube-ingress-dns       0                   5fb11b5433718       kube-ingress-dns-minikube                   kube-system
	7c848b41913dc       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       18 minutes ago      Running             local-path-provisioner     0                   dd0d4f86343b0       local-path-provisioner-648f6765c9-fmrx9     local-path-storage
	8cf5351cc4642       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58               19 minutes ago      Running             cloud-spanner-emulator     0                   4916510c10c2b       cloud-spanner-emulator-85f6b7fc65-zjsh8     default
	aa8b68706bef2       nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd                     19 minutes ago      Running             nvidia-device-plugin-ctr   0                   48071d8f52e3b       nvidia-device-plugin-daemonset-d29s2        kube-system
	59bd3def26ae0       ba04bb24b9575                                                                                                                19 minutes ago      Running             storage-provisioner        0                   a23e97739eb30       storage-provisioner                         kube-system
	1f08a0b17053c       138784d87c9c5                                                                                                                19 minutes ago      Running             coredns                    0                   41c06ea8e8dab       coredns-66bc5c9577-5b26c                    kube-system
	2c89530d2d498       05baa95f5142d                                                                                                                19 minutes ago      Running             kube-proxy                 0                   3401ff6190b48       kube-proxy-rr8rw                            kube-system
	9184b772f37f1       7eb2c6ff0c5a7                                                                                                                19 minutes ago      Running             kube-controller-manager    0                   431c21e60ec20       kube-controller-manager-addons-006450       kube-system
	16d61d5012e7c       b5f57ec6b9867                                                                                                                19 minutes ago      Running             kube-scheduler             0                   a52e4c8396f58       kube-scheduler-addons-006450                kube-system
	e5031a852e78a       43911e833d64d                                                                                                                19 minutes ago      Running             kube-apiserver             0                   dc93b2d9f3eda       kube-apiserver-addons-006450                kube-system
	57ec1a2227a7f       a1894772a478e                                                                                                                19 minutes ago      Running             etcd                       0                   31b1c12560e88       etcd-addons-006450                          kube-system
	
	
	==> controller_ingress [f2a47081481d] <==
	I1006 14:22:56.385446       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1006 14:22:56.386528       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-k4m4k"
	I1006 14:22:56.394070       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-k4m4k" node="addons-006450"
	I1006 14:22:56.402954       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-k4m4k" node="addons-006450"
	I1006 14:22:56.426765       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:22:56.426836       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1006 14:22:56.427067       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1006 14:33:02.321357       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1006 14:33:02.322712       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1006 14:33:02.330136       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W1006 14:33:02.330482       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1006 14:33:02.331181       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1006 14:33:02.331427       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"422cb3a2-2f49-4a83-8c3d-5e3e2b23e211", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2696", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I1006 14:33:02.410927       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:33:02.411811       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1006 14:33:05.664002       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1006 14:33:05.664777       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1006 14:33:05.705445       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:33:05.706043       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1006 14:33:08.997892       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1006 14:33:56.393716       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	W1006 14:33:56.398162       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1006 14:33:56.398258       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"422cb3a2-2f49-4a83-8c3d-5e3e2b23e211", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2835", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1006 14:39:14.048033       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1006 14:39:17.381538       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
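The controller's recurring "does not have any active Endpoint" warning means the default/nginx Service matches no ready pods, so the Ingress has nothing to route to; this is consistent with the image-pull failures in the Docker log above. A quick confirmation sketch:

  # Sketch: check whether the Service behind the Ingress has ready endpoints.
  kubectl --context addons-006450 describe service nginx -n default
  kubectl --context addons-006450 get endpoints nginx -n default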
	
	
	==> coredns [1f08a0b17053] <==
	[INFO] 10.244.0.7:56542 - 33336 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002392707s
	[INFO] 10.244.0.7:56542 - 54232 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000178786s
	[INFO] 10.244.0.7:56542 - 7333 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136449s
	[INFO] 10.244.0.7:33056 - 46078 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000279019s
	[INFO] 10.244.0.7:33056 - 46299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298719s
	[INFO] 10.244.0.7:56424 - 24690 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002456s
	[INFO] 10.244.0.7:56424 - 24468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000268837s
	[INFO] 10.244.0.7:59046 - 6419 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000205798s
	[INFO] 10.244.0.7:59046 - 6231 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164198s
	[INFO] 10.244.0.7:57987 - 61663 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001803179s
	[INFO] 10.244.0.7:57987 - 61843 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002072492s
	[INFO] 10.244.0.7:52614 - 11017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243541s
	[INFO] 10.244.0.7:52614 - 10853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192292s
	[INFO] 10.244.0.26:44951 - 63731 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272135s
	[INFO] 10.244.0.26:43415 - 16328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118021s
	[INFO] 10.244.0.26:39889 - 25486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139116s
	[INFO] 10.244.0.26:39105 - 18081 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154197s
	[INFO] 10.244.0.26:56273 - 11862 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000274474s
	[INFO] 10.244.0.26:44777 - 21446 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000313833s
	[INFO] 10.244.0.26:47488 - 37580 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00207181s
	[INFO] 10.244.0.26:50437 - 7597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001591703s
	[INFO] 10.244.0.26:49063 - 42612 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001943943s
	[INFO] 10.244.0.26:39378 - 64309 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00241089s
	[INFO] 10.244.0.30:44861 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00027604s
	[INFO] 10.244.0.30:48981 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134175s
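The NXDOMAIN bursts above are ordinary search-path expansion, not a resolver fault: with ndots:5 in the pods' resolv.conf (see the cri-dockerd rewrite in the Docker log), a name such as storage.googleapis.com is tried against every search domain before being queried as-is. A pod doing many external lookups can trim that chatter with a dnsConfig override (sketch; pod name illustrative):

  # Sketch: lower ndots so external names skip search-domain expansion.
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: low-ndots
  spec:
    dnsConfig:
      options:
      - name: ndots
        value: "1"
    containers:
    - name: app
      image: busybox:stable
      command: ["sleep", "3600"]
  EOF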
	
	
	==> describe nodes <==
	Name:               addons-006450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-006450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006450
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:21:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:41:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0364ef7d33ec438ea80b3763bd3b6ccc
	  System UUID:                35426571-e524-4094-b847-4e5d39cdb9e6
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	  default                     cloud-spanner-emulator-85f6b7fc65-zjsh8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  gadget                      gadget-mwfpm                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-k4m4k                     100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-5b26c                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-addons-006450                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-006450                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-006450                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-rr8rw                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-006450                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 nvidia-device-plugin-daemonset-d29s2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  local-path-storage          helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  local-path-storage          local-path-provisioner-648f6765c9-fmrx9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nfj9q                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   NodeReady                19m                kubelet          Node addons-006450 status is now: NodeReady
	  Normal   RegisteredNode           19m                node-controller  Node addons-006450 event: Registered Node addons-006450 in Controller
	
	
	==> dmesg <==
	[Oct 6 12:53] systemd-journald[226]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57ec1a2227a7] <==
	{"level":"warn","ts":"2025-10-06T14:21:25.170769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.187747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.208847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.304509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.763248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.777779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.281548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.337199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.387982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.452451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.481768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.595747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.614909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.631591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.664368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.680487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.697752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.764439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.772435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:31:23.319583Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1712}
	{"level":"info","ts":"2025-10-06T14:31:23.387482Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1712,"took":"67.368638ms","hash":2638762742,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4431872,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2025-10-06T14:31:23.387544Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2638762742,"revision":1712,"compact-revision":-1}
	{"level":"info","ts":"2025-10-06T14:36:23.326234Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2212}
	{"level":"info","ts":"2025-10-06T14:36:23.346470Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2212,"took":"19.456428ms","hash":564227051,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":5521408,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-06T14:36:23.346528Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":564227051,"revision":2212,"compact-revision":1712}
	
	
	==> kernel <==
	 14:41:04 up 21:23,  0 user,  load average: 1.46, 0.81, 1.50
	Linux addons-006450 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e5031a852e78] <==
	I1006 14:32:03.083740       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1006 14:32:03.182326       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1006 14:32:03.182490       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1006 14:32:03.230092       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1006 14:32:04.084579       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1006 14:32:04.275777       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1006 14:32:21.681568       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33172: use of closed network connection
	E1006 14:32:22.105912       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33216: use of closed network connection
	I1006 14:32:31.969654       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.103.92"}
	I1006 14:33:02.323607       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 14:33:02.632504       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.99.116"}
	I1006 14:33:18.931472       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1006 14:39:12.704082       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.704131       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.737069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.737126       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.753776       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.753816       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.823015       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.823372       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.862406       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.862457       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1006 14:39:13.754468       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1006 14:39:13.862749       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1006 14:39:13.997181       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [9184b772f37f] <==
	E1006 14:40:24.691410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:29.729000       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:29.730305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:30.190126       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:30.191691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:35.894081       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:35.895423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:41.200333       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:41.201534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:53.229140       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:53.230447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:53.427068       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:53.428119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:55.331727       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:55.334295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:56.271432       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:56.272508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:40:57.703452       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:40:57.704720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:41:00.383779       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:41:00.385486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:41:01.422964       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:41:01.424261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:41:01.700093       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:41:01.701382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [2c89530d2d49] <==
	I1006 14:21:35.738189       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:21:35.837556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:21:35.938392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:21:35.938475       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:21:35.938596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:21:36.026114       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:21:36.026170       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:21:36.061180       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:21:36.061523       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:21:36.061547       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:36.062743       1 config.go:200] "Starting service config controller"
	I1006 14:21:36.062767       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:21:36.063897       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:21:36.063910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:21:36.063943       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:21:36.063947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:21:36.064746       1 config.go:309] "Starting node config controller"
	I1006 14:21:36.064764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:21:36.064771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:21:36.163641       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:21:36.164636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:21:36.164662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [16d61d5012e7] <==
	I1006 14:21:27.016131       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:27.020700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.020968       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.021894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:21:27.023886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 14:21:27.029893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 14:21:27.030068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 14:21:27.038289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 14:21:27.038473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 14:21:27.038518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 14:21:27.038557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 14:21:27.040442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 14:21:27.040803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 14:21:27.040860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 14:21:27.040908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 14:21:27.040970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 14:21:27.041025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 14:21:27.041090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 14:21:27.041145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 14:21:27.041189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 14:21:27.041328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 14:21:27.041374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 14:21:27.041451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 14:21:27.041497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1006 14:21:28.621743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 14:39:55 addons-006450 kubelet[2258]: E1006 14:39:55.887415    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="8ee87356-c397-4036-9636-0e3b8e468249"
	Oct 06 14:39:56 addons-006450 kubelet[2258]: I1006 14:39:56.752303    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-d29s2" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:39:57 addons-006450 kubelet[2258]: E1006 14:39:57.752044    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:39:57 addons-006450 kubelet[2258]: E1006 14:39:57.754390    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:40:08 addons-006450 kubelet[2258]: E1006 14:40:08.754112    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:40:08 addons-006450 kubelet[2258]: E1006 14:40:08.760074    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="8ee87356-c397-4036-9636-0e3b8e468249"
	Oct 06 14:40:08 addons-006450 kubelet[2258]: E1006 14:40:08.768583    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:40:20 addons-006450 kubelet[2258]: E1006 14:40:20.752038    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:40:20 addons-006450 kubelet[2258]: E1006 14:40:20.765171    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="8ee87356-c397-4036-9636-0e3b8e468249"
	Oct 06 14:40:21 addons-006450 kubelet[2258]: I1006 14:40:21.752253    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:40:23 addons-006450 kubelet[2258]: E1006 14:40:23.754154    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:40:31 addons-006450 kubelet[2258]: E1006 14:40:31.753986    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="8ee87356-c397-4036-9636-0e3b8e468249"
	Oct 06 14:40:33 addons-006450 kubelet[2258]: E1006 14:40:33.751945    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:40:37 addons-006450 kubelet[2258]: E1006 14:40:37.753974    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:40:42 addons-006450 kubelet[2258]: W1006 14:40:42.305969    2258 logging.go:55] [core] [Channel #74 SubChannel #75]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 06 14:40:43 addons-006450 kubelet[2258]: E1006 14:40:43.884596    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:40:43 addons-006450 kubelet[2258]: E1006 14:40:43.884650    2258 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:40:43 addons-006450 kubelet[2258]: E1006 14:40:43.884725    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f_local-path-storage(8ee87356-c397-4036-9636-0e3b8e468249): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:40:43 addons-006450 kubelet[2258]: E1006 14:40:43.884759    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="8ee87356-c397-4036-9636-0e3b8e468249"
	Oct 06 14:40:46 addons-006450 kubelet[2258]: E1006 14:40:46.752289    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:40:48 addons-006450 kubelet[2258]: E1006 14:40:48.756664    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:40:54 addons-006450 kubelet[2258]: E1006 14:40:54.759369    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="8ee87356-c397-4036-9636-0e3b8e468249"
	Oct 06 14:40:59 addons-006450 kubelet[2258]: E1006 14:40:59.753891    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:41:01 addons-006450 kubelet[2258]: I1006 14:41:01.751559    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-zjsh8" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:41:01 addons-006450 kubelet[2258]: E1006 14:41:01.751647    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	
	
	==> storage-provisioner [59bd3def26ae] <==
	W1006 14:40:39.827140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:41.830715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:41.834700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:43.838653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:43.845267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:45.848543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:45.852998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:47.856324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:47.860279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:49.863351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:49.867977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:51.871189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:51.875883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:53.879506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:53.887820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:55.891644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:55.897967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:57.901894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:57.906641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:59.909920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:40:59.916863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:41:01.919619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:41:01.925076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:41:03.928393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:41:03.939263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
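Every failing workload in the capture above shares a single signature: Docker Hub's unauthenticated pull rate limit ("toomanyrequests" from the daemon). The kube-scheduler RBAC errors all land at 14:21:27, before its caches sync at 14:21:28, and the kube-controller-manager PartialObjectMetadata watch failures begin only after the apiserver terminates the volcano and snapshot CRD watchers, so both are most likely startup noise and addon-teardown fallout rather than independent faults. A minimal sketch for confirming the rate limit from the affected host, using Docker's documented ratelimitpreview/test endpoint (assumes curl and jq are available; HEAD requests do not consume quota):

	# Fetch an anonymous pull token, then read the RateLimit-* response headers.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
	  | grep -i '^ratelimit'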
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
helpers_test.go:269: (dbg) Run:  kubectl --context addons-006450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-006450 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-006450 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f: exit status 1 (107.277356ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-006450/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:33:02 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jbnj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jbnj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-006450
	  Warning  Failed     6m27s (x3 over 8m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m57s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m56s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x2 over 7m22s)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m51s (x22 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m51s (x22 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-006450/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:33:09 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zxjwt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-zxjwt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  7m56s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-006450
	  Normal   Pulling    4m53s (x5 over 7m56s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m53s (x5 over 7m55s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m53s (x5 over 7m55s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m49s (x20 over 7m55s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m36s (x21 over 7m55s)  kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2p7zd (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-2p7zd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t2tnf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s6s8k" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-006450 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable ingress-dns --alsologtostderr -v=1: (1.174733222s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable ingress --alsologtostderr -v=1: (7.792580367s)
--- FAIL: TestAddons/parallel/Ingress (492.56s)
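The Ingress test never reaches the ingress path: its nginx pod sits in ImagePullBackOff for the full eight minutes. One way to make this class of test independent of Docker Hub is to pre-seed the images into the minikube node so kubelet never pulls from the registry. A sketch using minikube's "image load" subcommand (profile name taken from this run; the busybox helper image in local-path-storage is digest-pinned, so tag-based preloading may not cover it):

	# Pull once on the host, then copy the images into the node's runtime.
	docker pull nginx:alpine
	docker pull nginx
	minikube -p addons-006450 image load nginx:alpine
	minikube -p addons-006450 image load nginx

Alternatively, starting the cluster with "minikube start --registry-mirror=<mirror-url>" points the node's Docker daemon at a pull-through mirror, which sidesteps the per-IP anonymous limit entirely.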

TestAddons/parallel/CSI (391.43s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1006 14:32:48.704830  805351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1006 14:32:48.713964  805351 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1006 14:32:48.713989  805351 kapi.go:107] duration metric: took 12.24303ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.253311ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-006450 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-006450 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6e933703-adb9-4036-9530-9f2296a30c95] Pending
helpers_test.go:352: "task-pv-pod" [6e933703-adb9-4036-9530-9f2296a30c95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-06 14:39:09.565506036 +0000 UTC m=+1128.363374815
addons_test.go:567: (dbg) Run:  kubectl --context addons-006450 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-006450 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-006450/192.168.49.2
Start Time:       Mon, 06 Oct 2025 14:33:09 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
  IP:  10.244.0.32
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zxjwt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-zxjwt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-006450
  Normal   Pulling    2m57s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     2m57s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m57s (x5 over 5m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     53s (x20 over 5m59s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    40s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-006450 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-006450 logs task-pv-pod -n default: exit status 1 (109.058567ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-006450 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
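The events above show the real failure: the pod never got past pulling docker.io/nginx because the kubelet hit Docker Hub's unauthenticated pull rate limit (toomanyrequests), so the CSI machinery itself was never exercised. A minimal mitigation sketch, assuming shell access to the host running this profile; `docker pull`, `minikube image load`, and `docker login` are standard commands, but applying them here is illustrative, not something this run did:

  # Pull once on the host, then copy the image into the cluster so the kubelet never hits Docker Hub:
  docker pull docker.io/nginx
  minikube -p addons-006450 image load docker.io/nginx
  # Alternatively, authenticated pulls get a higher rate limit:
  docker login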
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-006450
helpers_test.go:243: (dbg) docker inspect addons-006450:

-- stdout --
	[
	    {
	        "Id": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	        "Created": "2025-10-06T14:21:00.2900908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:21:00.391293391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hosts",
	        "LogPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90-json.log",
	        "Name": "/addons-006450",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006450:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006450",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	                "LowerDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-006450",
	                "Source": "/var/lib/docker/volumes/addons-006450/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006450",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006450",
	                "name.minikube.sigs.k8s.io": "addons-006450",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09ddbf4aed5db91393a32b35522feed3626a6a03e08f6e0448ebb5aad5998ddd",
	            "SandboxKey": "/var/run/docker/netns/09ddbf4aed5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006450": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f4:99:c4:a9:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "165f6e38041442732f4da1d95818020ddb3d0bf16ac6242c03ef818c1b73d7fb",
	                    "EndpointID": "b2523cc159053c0b4c03cccafdf39f8b82bb8b5c7e911427f39eed28857482fc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006450",
	                        "fedf355814c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
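The inspect dump records everything, but when only a few fields matter they can be extracted with Go templates, the same way the harness queries container state further down in these logs. A short sketch against this container, with the template paths taken from the dump above:

  docker inspect -f '{{.State.Status}}' addons-006450                                               # running
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-006450        # 192.168.49.2
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-006450  # 37506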
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006450 -n addons-006450
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 logs -n 25: (1.125722372s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-379615 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-379615                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ -o=json --download-only -p download-only-023239 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                              │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ minikube               │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-379615                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-379615   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p download-docker-403886 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p download-docker-403886                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p binary-mirror-859483 --alsologtostderr --binary-mirror http://127.0.0.1:42473 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p binary-mirror-859483                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ addons  │ enable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ start   │ -p addons-006450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:23 UTC │
	│ addons  │ addons-006450 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:31 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ enable headlamp -p addons-006450 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ ip      │ addons-006450 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:33 UTC │ 06 Oct 25 14:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:20:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:20:33.934280  806109 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:20:33.934452  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934482  806109 out.go:374] Setting ErrFile to fd 2...
	I1006 14:20:33.934503  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934791  806109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:20:33.935342  806109 out.go:368] Setting JSON to false
	I1006 14:20:33.936278  806109 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":75786,"bootTime":1759684648,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:20:33.936380  806109 start.go:140] virtualization:  
	I1006 14:20:33.939820  806109 out.go:179] * [addons-006450] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:20:33.942845  806109 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:20:33.942925  806109 notify.go:220] Checking for updates...
	I1006 14:20:33.949235  806109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:20:33.952125  806109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:20:33.955049  806109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:20:33.957833  806109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:20:33.960596  806109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:20:33.963595  806109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:20:33.986303  806109 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:20:33.986439  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.050609  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.04143491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.050721  806109 docker.go:318] overlay module found
	I1006 14:20:34.053842  806109 out.go:179] * Using the docker driver based on user configuration
	I1006 14:20:34.056712  806109 start.go:304] selected driver: docker
	I1006 14:20:34.056733  806109 start.go:924] validating driver "docker" against <nil>
	I1006 14:20:34.056748  806109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:20:34.057477  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.111822  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.102783115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.111982  806109 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:20:34.112211  806109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:20:34.115275  806109 out.go:179] * Using Docker driver with root privileges
	I1006 14:20:34.118173  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:20:34.118253  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:20:34.118263  806109 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 14:20:34.118342  806109 start.go:348] cluster config:
	{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:20:34.121483  806109 out.go:179] * Starting "addons-006450" primary control-plane node in "addons-006450" cluster
	I1006 14:20:34.124347  806109 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:20:34.127249  806109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:20:34.130100  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:34.130168  806109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1006 14:20:34.130177  806109 cache.go:58] Caching tarball of preloaded images
	I1006 14:20:34.130222  806109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:20:34.130282  806109 preload.go:233] Found /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1006 14:20:34.130293  806109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1006 14:20:34.130624  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:20:34.130655  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json: {Name:mk78082a38967c23c9e0fec5499d829d2aa5600d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:20:34.149434  806109 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 14:20:34.149575  806109 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 14:20:34.149597  806109 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 14:20:34.149602  806109 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 14:20:34.149610  806109 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 14:20:34.149626  806109 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 14:20:52.383725  806109 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 14:20:52.383777  806109 cache.go:232] Successfully downloaded all kic artifacts
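	The cache steps above follow a check-before-pull pattern: look for the kic base image in the local docker daemon, then in minikube's on-disk cache, and only pull when both miss. The daemon-side check can be reproduced with plain docker; the image reference is taken from the log, and the one-liner itself is illustrative rather than what the harness runs:
	
	  docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 >/dev/null 2>&1 \
	    && echo "in local daemon" || echo "would need a pull"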
	I1006 14:20:52.383807  806109 start.go:360] acquireMachinesLock for addons-006450: {Name:mk6a488a7fef2004d8c41401b261288db1a55041 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:20:52.383940  806109 start.go:364] duration metric: took 111.276µs to acquireMachinesLock for "addons-006450"
	I1006 14:20:52.383972  806109 start.go:93] Provisioning new machine with config: &{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:20:52.384058  806109 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:20:52.387398  806109 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 14:20:52.387686  806109 start.go:159] libmachine.API.Create for "addons-006450" (driver="docker")
	I1006 14:20:52.387754  806109 client.go:168] LocalClient.Create starting
	I1006 14:20:52.387880  806109 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem
	I1006 14:20:52.755986  806109 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem
	I1006 14:20:54.000215  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:20:54.021843  806109 cli_runner.go:211] docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:20:54.021935  806109 network_create.go:284] running [docker network inspect addons-006450] to gather additional debugging logs...
	I1006 14:20:54.021951  806109 cli_runner.go:164] Run: docker network inspect addons-006450
	W1006 14:20:54.038245  806109 cli_runner.go:211] docker network inspect addons-006450 returned with exit code 1
	I1006 14:20:54.038287  806109 network_create.go:287] error running [docker network inspect addons-006450]: docker network inspect addons-006450: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006450 not found
	I1006 14:20:54.038299  806109 network_create.go:289] output of [docker network inspect addons-006450]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006450 not found
	
	** /stderr **
	I1006 14:20:54.038438  806109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:20:54.055471  806109 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d4c380}
	I1006 14:20:54.055517  806109 network_create.go:124] attempt to create docker network addons-006450 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:20:54.055572  806109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006450 addons-006450
	I1006 14:20:54.110341  806109 network_create.go:108] docker network addons-006450 192.168.49.0/24 created
	I1006 14:20:54.110371  806109 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006450" container
	I1006 14:20:54.110459  806109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:20:54.127884  806109 cli_runner.go:164] Run: docker volume create addons-006450 --label name.minikube.sigs.k8s.io=addons-006450 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:20:54.148808  806109 oci.go:103] Successfully created a docker volume addons-006450
	I1006 14:20:54.148892  806109 cli_runner.go:164] Run: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:20:56.324467  806109 cli_runner.go:217] Completed: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.175532295s)
	I1006 14:20:56.324511  806109 oci.go:107] Successfully prepared a docker volume addons-006450
	I1006 14:20:56.324545  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:56.324566  806109 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:20:56.324627  806109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:21:00.168028  806109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.843356071s)
	I1006 14:21:00.168062  806109 kic.go:203] duration metric: took 3.843492791s to extract preloaded images to volume ...
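	The docker runs above are minikube's preload pattern: create a named volume that will become the node's /var, let a throwaway container with a tar entrypoint extract the preloaded image set into it, then start the node container with that volume mounted. A generic sketch of the same pattern; the volume name, tarball path, container name, and TAG placeholder are illustrative, not values from this run:
	
	  docker volume create demo-var
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	    -v demo-var:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:TAG -I lz4 -xf /preloaded.tar -C /extractDir
	  docker run -d --privileged --name demo-node -v demo-var:/var gcr.io/k8s-minikube/kicbase-builds:TAG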
	W1006 14:21:00.168228  806109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 14:21:00.168353  806109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:21:00.269120  806109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006450 --name addons-006450 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006450 --network addons-006450 --ip 192.168.49.2 --volume addons-006450:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:21:00.667135  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Running}}
	I1006 14:21:00.686913  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:00.708915  806109 cli_runner.go:164] Run: docker exec addons-006450 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:21:00.766467  806109 oci.go:144] the created container "addons-006450" has a running status.
	I1006 14:21:00.766496  806109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa...
	I1006 14:21:01.209222  806109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:21:01.244403  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.278442  806109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:21:01.278462  806109 kic_runner.go:114] Args: [docker exec --privileged addons-006450 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:21:01.342721  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.366223  806109 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:01.366312  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.386115  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.388381  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.388404  806109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:01.583723  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.583748  806109 ubuntu.go:182] provisioning hostname "addons-006450"
	I1006 14:21:01.583829  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.604321  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.604631  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.604648  806109 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006450 && echo "addons-006450" | sudo tee /etc/hostname
	I1006 14:21:01.762558  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.762702  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.783081  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.783379  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.783396  806109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:01.932033  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:21:01.932056  806109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-803497/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-803497/.minikube}
	I1006 14:21:01.932087  806109 ubuntu.go:190] setting up certificates
	I1006 14:21:01.932101  806109 provision.go:84] configureAuth start
	I1006 14:21:01.932162  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:01.953264  806109 provision.go:143] copyHostCerts
	I1006 14:21:01.953391  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem (1082 bytes)
	I1006 14:21:01.953509  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem (1123 bytes)
	I1006 14:21:01.953572  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem (1675 bytes)
	I1006 14:21:01.953642  806109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem org=jenkins.addons-006450 san=[127.0.0.1 192.168.49.2 addons-006450 localhost minikube]
	I1006 14:21:02.364998  806109 provision.go:177] copyRemoteCerts
	I1006 14:21:02.365098  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:02.365155  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.381521  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:02.475833  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:02.494054  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:02.512540  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 14:21:02.530771  806109 provision.go:87] duration metric: took 598.646522ms to configureAuth
	I1006 14:21:02.530795  806109 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:02.531031  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:02.531089  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.548485  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.548797  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.548814  806109 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 14:21:02.680553  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1006 14:21:02.680572  806109 ubuntu.go:71] root file system type: overlay
	I1006 14:21:02.680735  806109 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 14:21:02.680812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.697880  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.698189  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.698287  806109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 14:21:02.846019  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1006 14:21:02.846167  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.863632  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.864002  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.864029  806109 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 14:21:03.799164  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-06 14:21:02.840466123 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
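	The diff-driven command above makes the unit update idempotent: diff exits zero when the rendered unit matches the installed one, so the replace-and-restart branch after || runs only when something actually changed. A minimal sketch of the same pattern (foo.service and render_unit are hypothetical, not from this log):
	  render_unit > /lib/systemd/system/foo.service.new    # 'render_unit' stands in for the printf seen above
	  sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new \
	    || { sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service; \
	         sudo systemctl daemon-reload && sudo systemctl restart foo; }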
	
	I1006 14:21:03.799202  806109 machine.go:96] duration metric: took 2.432959766s to provisionDockerMachine
	I1006 14:21:03.799214  806109 client.go:171] duration metric: took 11.411453149s to LocalClient.Create
	I1006 14:21:03.799235  806109 start.go:167] duration metric: took 11.41157629s to libmachine.API.Create "addons-006450"
	I1006 14:21:03.799246  806109 start.go:293] postStartSetup for "addons-006450" (driver="docker")
	I1006 14:21:03.799257  806109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:03.799333  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:03.799381  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.817018  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:03.911433  806109 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:03.914606  806109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:03.914683  806109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:03.914699  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/addons for local assets ...
	I1006 14:21:03.914767  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/files for local assets ...
	I1006 14:21:03.914795  806109 start.go:296] duration metric: took 115.542737ms for postStartSetup
	I1006 14:21:03.915135  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:03.931532  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:21:03.931854  806109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:03.931910  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.948768  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.041025  806109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:04.046229  806109 start.go:128] duration metric: took 11.662156071s to createHost
	I1006 14:21:04.046252  806109 start.go:83] releasing machines lock for "addons-006450", held for 11.662297525s
	I1006 14:21:04.046327  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:04.063754  806109 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:04.063815  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.063893  806109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:04.063975  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.082777  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.099024  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.268948  806109 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:04.275561  806109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:21:04.279819  806109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:04.279895  806109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:04.306291  806109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 14:21:04.306318  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.306351  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.306446  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.320125  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1006 14:21:04.329116  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 14:21:04.338037  806109 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.338156  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 14:21:04.347404  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.357144  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 14:21:04.366129  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.374845  806109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:04.382821  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 14:21:04.391940  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1006 14:21:04.400832  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1006 14:21:04.409604  806109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:04.417019  806109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
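	The one-shot write above turns on IPv4 forwarding so the bridge CNI can route pod traffic; it does not survive a reboot. A persistent equivalent would be a sysctl drop-in (the file name below is an assumption, not taken from this log):
	  # Hypothetical drop-in; 'sysctl --system' reloads every file under /etc/sysctl.d
	  echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	  sudo sysctl --system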
	I1006 14:21:04.424313  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:04.532131  806109 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1006 14:21:04.625905  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.625977  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.626053  806109 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 14:21:04.640910  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.654413  806109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:04.685901  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.698603  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 14:21:04.711790  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.725497  806109 ssh_runner.go:195] Run: which cri-dockerd
	I1006 14:21:04.729345  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 14:21:04.737737  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1006 14:21:04.751393  806109 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 14:21:04.873692  806109 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 14:21:04.984971  806109 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.985108  806109 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1006 14:21:05.002843  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1006 14:21:05.020602  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.142830  806109 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 14:21:05.525909  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:05.538352  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1006 14:21:05.551902  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:05.567756  806109 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 14:21:05.691941  806109 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 14:21:05.814431  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.934017  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 14:21:05.949991  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1006 14:21:05.962662  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.092789  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1006 14:21:06.164834  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:06.178359  806109 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 14:21:06.178520  806109 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 14:21:06.182231  806109 start.go:563] Will wait 60s for crictl version
	I1006 14:21:06.182343  806109 ssh_runner.go:195] Run: which crictl
	I1006 14:21:06.185820  806109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:06.209958  806109 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1006 14:21:06.210077  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.232534  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.261297  806109 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1006 14:21:06.261408  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:06.277505  806109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:06.281321  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
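	The command above is the filter-then-append idiom minikube uses for /etc/hosts: grep -v (with $'\t...' ANSI-C quoting for a literal tab) drops any stale host.minikube.internal line, the fresh mapping is appended, and the result is copied back over the original. Spelled out with the gateway IP from this run:
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.49.1\thost.minikube.internal\n'
	  } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts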
	I1006 14:21:06.291363  806109 kubeadm.go:883] updating cluster {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:06.291470  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:21:06.291533  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.310531  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.310560  806109 docker.go:621] Images already preloaded, skipping extraction
	I1006 14:21:06.310627  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.329469  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.329494  806109 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:21:06.329511  806109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1006 14:21:06.329612  806109 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-006450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:21:06.329683  806109 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 14:21:06.383455  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:06.383492  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:06.383512  806109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:06.383538  806109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006450 NodeName:addons-006450 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:06.383695  806109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-006450"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:21:06.383769  806109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:06.391605  806109 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:06.391780  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:06.399572  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1006 14:21:06.412296  806109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:06.425462  806109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
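	The three payloads shipped above are the kubelet drop-in, the kubelet unit, and the kubeadm config rendered earlier in this log. A config like this can be sanity-checked before the real init; a sketch using kubeadm's own dry-run mode (minikube itself skips this and runs the full init seen below):
	  # Validates the config and prints what would be created, without
	  # writing manifests or starting any component:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run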
	I1006 14:21:06.438424  806109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:06.442129  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.452170  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.565870  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:06.583339  806109 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450 for IP: 192.168.49.2
	I1006 14:21:06.583363  806109 certs.go:195] generating shared ca certs ...
	I1006 14:21:06.583383  806109 certs.go:227] acquiring lock for ca certs: {Name:mk78547ccc35462965e66385811a001935f7f131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.583518  806109 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key
	I1006 14:21:06.758169  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt ...
	I1006 14:21:06.758199  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt: {Name:mke50bad3f8d3d8c6fc7003f3930a8a3fa326b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758398  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key ...
	I1006 14:21:06.758412  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key: {Name:mk5abe63bfac59b481f1b34a2e6312b79c376290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758508  806109 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key
	I1006 14:21:07.226648  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt ...
	I1006 14:21:07.226681  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt: {Name:mk35f86863953865131b747e65133218cef7ac69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.226896  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key ...
	I1006 14:21:07.226910  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key: {Name:mk32f77223b3be8cca86a275e013030fd8c48071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.227011  806109 certs.go:257] generating profile certs ...
	I1006 14:21:07.227078  806109 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key
	I1006 14:21:07.227095  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt with IP's: []
	I1006 14:21:08.232319  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt ...
	I1006 14:21:08.232348  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: {Name:mk237396132558310e9472dccd1a03e68855c562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232531  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key ...
	I1006 14:21:08.232540  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key: {Name:mkddc2eaac1b60c97f1b0888b122f0d14ff81585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232614  806109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa
	I1006 14:21:08.232629  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:21:08.361861  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa ...
	I1006 14:21:08.361891  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa: {Name:mk44f5f6071204e4219adaa4cbde67bf1f671150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362071  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa ...
	I1006 14:21:08.362085  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa: {Name:mkaddbc6367afe0cdf204382e298fb821349ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362173  806109 certs.go:382] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt
	I1006 14:21:08.362251  806109 certs.go:386] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key
	I1006 14:21:08.362308  806109 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key
	I1006 14:21:08.362337  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt with IP's: []
	I1006 14:21:09.174420  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt ...
	I1006 14:21:09.174451  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt: {Name:mk6a018d5a25b41127abffe602062c5fb3c9da1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174648  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key ...
	I1006 14:21:09.174662  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key: {Name:mk882903eb03fda7b8a7b7a45601eaab350263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174869  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:09.174912  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:09.174936  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:09.174963  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem (1675 bytes)
	I1006 14:21:09.175647  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:09.195248  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:21:09.214696  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:09.234148  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 14:21:09.252534  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:21:09.270877  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:09.289342  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:09.307151  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:09.325295  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:09.343473  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:09.356830  806109 ssh_runner.go:195] Run: openssl version
	I1006 14:21:09.363194  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:09.371688  806109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375519  806109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375603  806109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.421333  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
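	The two steps above install minikube's CA the way OpenSSL expects: a PEM under /usr/share/ca-certificates plus a symlink in /etc/ssl/certs named after the certificate's subject hash (b5213941 here, printed by the 'openssl x509 -hash' call). Deriving the link name by hand, as a sketch of what the logged commands do:
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"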
	I1006 14:21:09.430436  806109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:09.434631  806109 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:21:09.434680  806109 kubeadm.go:400] StartCluster: {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:09.434811  806109 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:21:09.456777  806109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:09.465021  806109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:21:09.473033  806109 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:21:09.473109  806109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:21:09.480866  806109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:21:09.480886  806109 kubeadm.go:157] found existing configuration files:
	
	I1006 14:21:09.480957  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:21:09.488809  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:21:09.488875  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:21:09.496674  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:21:09.504791  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:21:09.504865  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:21:09.512822  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.520596  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:21:09.520672  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.528333  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:21:09.536500  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:21:09.536573  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:21:09.544325  806109 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1006 14:21:09.582751  806109 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:21:09.582817  806109 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:21:09.609398  806109 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:21:09.609476  806109 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 14:21:09.609518  806109 kubeadm.go:318] OS: Linux
	I1006 14:21:09.609570  806109 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:21:09.609625  806109 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 14:21:09.609679  806109 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:21:09.609733  806109 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:21:09.609792  806109 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:21:09.609847  806109 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:21:09.609902  806109 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:21:09.609955  806109 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:21:09.610011  806109 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 14:21:09.690823  806109 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:21:09.690944  806109 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:21:09.691059  806109 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:21:09.716052  806109 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:21:09.722414  806109 out.go:252]   - Generating certificates and keys ...
	I1006 14:21:09.722525  806109 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:21:09.722604  806109 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:21:10.515752  806109 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:21:11.397580  806109 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:21:12.455188  806109 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:21:12.900218  806109 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:21:13.333042  806109 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:21:13.333192  806109 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:13.558599  806109 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:21:13.558992  806109 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:14.483025  806109 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:21:15.088755  806109 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:21:15.636700  806109 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:21:15.637033  806109 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:21:16.739302  806109 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:21:17.694897  806109 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:21:18.343756  806109 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:21:18.712603  806109 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:21:19.266809  806109 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:21:19.267485  806109 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:21:19.270758  806109 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:21:19.274504  806109 out.go:252]   - Booting up control plane ...
	I1006 14:21:19.274628  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:21:19.274721  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:21:19.275790  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:21:19.292829  806109 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:21:19.293280  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:21:19.301074  806109 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:21:19.301395  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:21:19.301643  806109 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:21:19.440373  806109 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:21:19.440504  806109 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:21:20.940044  806109 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501293606s
	I1006 14:21:20.940318  806109 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:21:20.940416  806109 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:21:20.940516  806109 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:21:20.940602  806109 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:21:24.828532  806109 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.887425512s
	I1006 14:21:27.037731  806109 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.097440124s
	I1006 14:21:27.942161  806109 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001481359s
	I1006 14:21:27.961418  806109 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 14:21:27.977744  806109 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 14:21:27.992347  806109 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 14:21:27.992563  806109 kubeadm.go:318] [mark-control-plane] Marking the node addons-006450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 14:21:28.013758  806109 kubeadm.go:318] [bootstrap-token] Using token: e1p0fh.afy23ij81unzzcb1
	I1006 14:21:28.016851  806109 out.go:252]   - Configuring RBAC rules ...
	I1006 14:21:28.016992  806109 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 14:21:28.022251  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 14:21:28.031560  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 14:21:28.036500  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 14:21:28.041064  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 14:21:28.048112  806109 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 14:21:28.349107  806109 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 14:21:28.790402  806109 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 14:21:29.351014  806109 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 14:21:29.352283  806109 kubeadm.go:318] 
	I1006 14:21:29.352364  806109 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 14:21:29.352375  806109 kubeadm.go:318] 
	I1006 14:21:29.352461  806109 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 14:21:29.352472  806109 kubeadm.go:318] 
	I1006 14:21:29.352498  806109 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 14:21:29.352567  806109 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 14:21:29.352625  806109 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 14:21:29.352634  806109 kubeadm.go:318] 
	I1006 14:21:29.352691  806109 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 14:21:29.352700  806109 kubeadm.go:318] 
	I1006 14:21:29.352750  806109 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 14:21:29.352759  806109 kubeadm.go:318] 
	I1006 14:21:29.352815  806109 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 14:21:29.352899  806109 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 14:21:29.352974  806109 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 14:21:29.352983  806109 kubeadm.go:318] 
	I1006 14:21:29.353071  806109 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 14:21:29.353153  806109 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 14:21:29.353161  806109 kubeadm.go:318] 
	I1006 14:21:29.353249  806109 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353360  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 \
	I1006 14:21:29.353397  806109 kubeadm.go:318] 	--control-plane 
	I1006 14:21:29.353406  806109 kubeadm.go:318] 
	I1006 14:21:29.353495  806109 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 14:21:29.353503  806109 kubeadm.go:318] 
	I1006 14:21:29.353588  806109 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353698  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 
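	The --discovery-token-ca-cert-hash value printed above is the SHA-256 of the cluster CA's public key. If it is needed again after this output scrolls away, it can be recomputed on the control-plane node with the pipeline from the kubeadm documentation:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'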
	I1006 14:21:29.356907  806109 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 14:21:29.357135  806109 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 14:21:29.357260  806109 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
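	The first warning only means this host is still on cgroups v1 (the kernel is 5.15 on AWS); which version a node runs can be checked with:

	    stat -fc %T /sys/fs/cgroup/
	    # cgroup2fs -> cgroups v2, tmpfs -> cgroups v1

	The kubelet warning is harmless here, since minikube starts the unit itself (see the systemctl start kubelet run further down) rather than relying on systemd enablement.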
	I1006 14:21:29.357283  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:29.357298  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:29.360240  806109 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:21:29.363197  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:21:29.371108  806109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
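	The 496-byte payload itself is not shown in the log; a representative bridge conflist of the kind minikube writes (names and subnet here are illustrative, not guaranteed to match the actual bytes) looks like:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	          "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }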
	I1006 14:21:29.386109  806109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:21:29.386176  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:29.386250  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006450 minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-006450 minikube.k8s.io/primary=true
	I1006 14:21:29.530062  806109 ops.go:34] apiserver oom_adj: -16
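	An oom_adj of -16 tells the kernel OOM killer to strongly prefer sacrificing other processes before kube-apiserver. The value can be read back at any time with the same probe minikube ran above:

	    cat /proc/$(pgrep kube-apiserver)/oom_adj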
	I1006 14:21:29.530192  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.031190  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.530267  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.030839  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.530611  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.030258  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.530722  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.030864  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.530331  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.030732  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.138751  806109 kubeadm.go:1113] duration metric: took 4.752637843s to wait for elevateKubeSystemPrivileges
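	The burst of identical "get sa default" runs above is minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists, which is the signal that the controller-manager has finished bootstrapping kube-system. A shell equivalent of that loop:

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done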
	I1006 14:21:34.138779  806109 kubeadm.go:402] duration metric: took 24.704102384s to StartCluster
	I1006 14:21:34.138798  806109 settings.go:142] acquiring lock: {Name:mk86d6d1803b10e0f74b7ca9be175f37419eb162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.138932  806109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:21:34.139342  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/kubeconfig: {Name:mkd0e7dce0fefee9d8326b7f5e1280f715df58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.139547  806109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:21:34.139652  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 14:21:34.139913  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.139945  806109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
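	The toEnable map above is Go map-literal syntax listing every addon toggle resolved for this profile. The same toggles can be driven from the CLI, e.g.:

	    minikube -p addons-006450 addons enable volcano
	    minikube -p addons-006450 addons disable dashboard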
	I1006 14:21:34.140026  806109 addons.go:69] Setting yakd=true in profile "addons-006450"
	I1006 14:21:34.140047  806109 addons.go:238] Setting addon yakd=true in "addons-006450"
	I1006 14:21:34.140069  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.140558  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.140784  806109 addons.go:69] Setting inspektor-gadget=true in profile "addons-006450"
	I1006 14:21:34.140802  806109 addons.go:238] Setting addon inspektor-gadget=true in "addons-006450"
	I1006 14:21:34.140825  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.141217  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.141581  806109 addons.go:69] Setting metrics-server=true in profile "addons-006450"
	I1006 14:21:34.141646  806109 addons.go:238] Setting addon metrics-server=true in "addons-006450"
	I1006 14:21:34.141685  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.142139  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.143205  806109 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.143238  806109 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-006450"
	I1006 14:21:34.143270  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.143806  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.144933  806109 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.144962  806109 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-006450"
	I1006 14:21:34.144997  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.145499  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.146720  806109 addons.go:69] Setting cloud-spanner=true in profile "addons-006450"
	I1006 14:21:34.146748  806109 addons.go:238] Setting addon cloud-spanner=true in "addons-006450"
	I1006 14:21:34.146777  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.147335  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.156945  806109 addons.go:69] Setting registry=true in profile "addons-006450"
	I1006 14:21:34.157043  806109 addons.go:238] Setting addon registry=true in "addons-006450"
	I1006 14:21:34.157131  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.157718  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.176071  806109 addons.go:69] Setting registry-creds=true in profile "addons-006450"
	I1006 14:21:34.176145  806109 addons.go:238] Setting addon registry-creds=true in "addons-006450"
	I1006 14:21:34.176197  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.176774  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.185281  806109 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006450"
	I1006 14:21:34.185740  806109 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:34.185846  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.187060  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.193152  806109 addons.go:69] Setting storage-provisioner=true in profile "addons-006450"
	I1006 14:21:34.193188  806109 addons.go:238] Setting addon storage-provisioner=true in "addons-006450"
	I1006 14:21:34.193224  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.193707  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.207765  806109 addons.go:69] Setting default-storageclass=true in profile "addons-006450"
	I1006 14:21:34.207813  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006450"
	I1006 14:21:34.208233  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.208517  806109 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006450"
	I1006 14:21:34.208563  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006450"
	I1006 14:21:34.208903  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.218653  806109 addons.go:69] Setting volcano=true in profile "addons-006450"
	I1006 14:21:34.219019  806109 addons.go:238] Setting addon volcano=true in "addons-006450"
	I1006 14:21:34.219129  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.219730  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.219851  806109 addons.go:69] Setting gcp-auth=true in profile "addons-006450"
	I1006 14:21:34.219900  806109 mustload.go:65] Loading cluster: addons-006450
	I1006 14:21:34.220156  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.220463  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.244567  806109 addons.go:69] Setting volumesnapshots=true in profile "addons-006450"
	I1006 14:21:34.244607  806109 addons.go:238] Setting addon volumesnapshots=true in "addons-006450"
	I1006 14:21:34.244648  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.245166  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.256667  806109 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:34.256935  806109 addons.go:69] Setting ingress=true in profile "addons-006450"
	I1006 14:21:34.256960  806109 addons.go:238] Setting addon ingress=true in "addons-006450"
	I1006 14:21:34.257001  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.257557  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.285413  806109 addons.go:69] Setting ingress-dns=true in profile "addons-006450"
	I1006 14:21:34.285459  806109 addons.go:238] Setting addon ingress-dns=true in "addons-006450"
	I1006 14:21:34.285510  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.286061  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.332782  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 14:21:34.338069  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 14:21:34.338156  806109 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 14:21:34.338257  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.357721  806109 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 14:21:34.362166  806109 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:34.362235  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 14:21:34.362331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.380568  806109 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 14:21:34.383806  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 14:21:34.383934  806109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 14:21:34.384103  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.384670  806109 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 14:21:34.393975  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 14:21:34.394079  806109 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 14:21:34.394248  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.420035  806109 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 14:21:34.423442  806109 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:34.423541  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 14:21:34.423642  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.431543  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:34.457975  806109 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 14:21:34.497876  806109 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 14:21:34.498037  806109 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 14:21:34.510678  806109 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 14:21:34.519256  806109 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:34.519362  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 14:21:34.519521  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.526420  806109 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:34.526447  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 14:21:34.526546  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.528693  806109 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 14:21:34.528724  806109 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 14:21:34.528812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.532917  806109 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 14:21:34.536266  806109 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 14:21:34.537209  806109 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 14:21:34.537230  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 14:21:34.537331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.542063  806109 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-006450"
	I1006 14:21:34.542107  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.542545  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.581749  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 14:21:34.585130  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.588025  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.590892  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:34.590917  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 14:21:34.591008  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.605945  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:34.605973  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 14:21:34.606041  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.626809  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.628682  806109 addons.go:238] Setting addon default-storageclass=true in "addons-006450"
	I1006 14:21:34.628721  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.629125  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.636774  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 14:21:34.640152  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.649003  806109 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1006 14:21:34.649626  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.656019  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 14:21:34.658838  806109 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1006 14:21:34.664662  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.676340  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 14:21:34.676611  806109 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1006 14:21:34.703838  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.723458  806109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:34.726631  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:34.726657  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:34.726743  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.752688  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 14:21:34.756756  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 14:21:34.760053  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 14:21:34.763938  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 14:21:34.769389  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 14:21:34.772287  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 14:21:34.772317  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 14:21:34.772394  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.772747  806109 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:34.772787  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1006 14:21:34.772862  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.804304  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.808420  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.822462  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.823147  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.867044  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.870362  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.874341  806109 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 14:21:34.876981  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.878063  806109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:34.878079  806109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:34.878140  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.888089  806109 out.go:179]   - Using image docker.io/busybox:stable
	I1006 14:21:34.891239  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:34.891265  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 14:21:34.891331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.920306  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.945324  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
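	The sed pipeline above splices two directives into the CoreDNS Corefile before replacing the ConfigMap: a log directive ahead of errors, and a hosts block ahead of forward so pods can resolve host.minikube.internal to the host gateway. The injected stanza is:

	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }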
	I1006 14:21:34.947994  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:34.970150  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.970251  806109 retry.go:31] will retry after 147.40402ms: ssh: handshake failed: EOF
	W1006 14:21:34.972537  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.972566  806109 retry.go:31] will retry after 281.687683ms: ssh: handshake failed: EOF
	I1006 14:21:34.975793  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.005444  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:35.009771  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.009812  806109 retry.go:31] will retry after 207.774831ms: ssh: handshake failed: EOF
	I1006 14:21:35.012483  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.127149  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1006 14:21:35.219409  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.219491  806109 retry.go:31] will retry after 414.252414ms: ssh: handshake failed: EOF
	W1006 14:21:35.255517  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.255595  806109 retry.go:31] will retry after 378.429324ms: ssh: handshake failed: EOF
	I1006 14:21:35.851743  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:35.853206  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:35.989160  806109 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:35.989181  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 14:21:36.111352  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:36.151070  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 14:21:36.151165  806109 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 14:21:36.192781  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 14:21:36.192855  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 14:21:36.226627  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 14:21:36.226690  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 14:21:36.243375  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:36.255630  806109 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 14:21:36.255746  806109 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 14:21:36.350477  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 14:21:36.350562  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 14:21:36.377661  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:36.396057  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:36.399305  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:36.426714  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 14:21:36.426796  806109 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 14:21:36.427640  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:36.435627  806109 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.435647  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 14:21:36.443471  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:36.479083  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:36.481831  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 14:21:36.481904  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 14:21:36.527849  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 14:21:36.527927  806109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 14:21:36.537515  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 14:21:36.537591  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 14:21:36.597935  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 14:21:36.598000  806109 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 14:21:36.601149  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.790553  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:36.790647  806109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 14:21:36.821053  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 14:21:36.821135  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 14:21:36.867220  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:36.871426  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 14:21:36.871504  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 14:21:36.880338  806109 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.753102328s)
	I1006 14:21:36.880515  806109 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.935150087s)
	I1006 14:21:36.880679  806109 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1006 14:21:36.881380  806109 node_ready.go:35] waiting up to 6m0s for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887470  806109 node_ready.go:49] node "addons-006450" is "Ready"
	I1006 14:21:36.887509  806109 node_ready.go:38] duration metric: took 6.110221ms for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887526  806109 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:21:36.887614  806109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:21:36.891551  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:37.041224  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.041263  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 14:21:37.185540  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 14:21:37.185582  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 14:21:37.245756  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 14:21:37.245794  806109 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 14:21:37.320678  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.384934  806109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006450" context rescaled to 1 replicas
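	On a single-node cluster minikube trims CoreDNS from the stock two replicas down to one to save resources; the equivalent manual step would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1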
	I1006 14:21:37.439254  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 14:21:37.439280  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 14:21:37.491833  806109 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:37.491853  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 14:21:37.710140  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.858315722s)
	I1006 14:21:37.710258  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.856978431s)
	I1006 14:21:37.797019  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 14:21:37.797087  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 14:21:38.055462  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.944020191s)
	I1006 14:21:38.066071  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:38.209415  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 14:21:38.209495  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 14:21:38.308015  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 14:21:38.308047  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 14:21:38.731766  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 14:21:38.731811  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 14:21:38.884673  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:38.884702  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 14:21:39.201324  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:42.056707  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 14:21:42.056850  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:42.096992  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:43.527695  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.284260443s)
	I1006 14:21:43.527736  806109 addons.go:479] Verifying addon ingress=true in "addons-006450"
	I1006 14:21:43.527908  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.150170305s)
	I1006 14:21:43.528008  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.131874449s)
	W1006 14:21:43.528029  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:43.528050  806109 retry.go:31] will retry after 227.873764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
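	The "apiVersion not set, kind not set" failure is consistent with the transfer line earlier in the log showing ig-crd.yaml arriving as only 14 bytes, i.e. an effectively empty manifest. A quick way to confirm on the node (a hypothetical check, not run in this log):

	    wc -c /etc/kubernetes/addons/ig-crd.yaml

	The retry below re-runs the apply with --force, which lets kubectl delete and re-create objects it cannot patch.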
	I1006 14:21:43.528137  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.128758076s)
	I1006 14:21:43.528185  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.100376481s)
	I1006 14:21:43.528469  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.084972148s)
	I1006 14:21:43.528566  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.04940419s)
	I1006 14:21:43.528706  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.927477657s)
	I1006 14:21:43.528726  806109 addons.go:479] Verifying addon registry=true in "addons-006450"
	I1006 14:21:43.532546  806109 out.go:179] * Verifying ingress addon...
	I1006 14:21:43.534069  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 14:21:43.534935  806109 out.go:179] * Verifying registry addon...
	I1006 14:21:43.537759  806109 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 14:21:43.540886  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 14:21:43.565742  806109 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 14:21:43.565781  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:43.568676  806109 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 14:21:43.568708  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1006 14:21:43.576208  806109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
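	This is a routine optimistic-concurrency conflict: two writers raced on the local-path StorageClass's resourceVersion, so the update was rejected and is safe to retry. Done by hand, demoting a class uses the standard annotation patch:

	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'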
	I1006 14:21:43.749034  806109 addons.go:238] Setting addon gcp-auth=true in "addons-006450"
	I1006 14:21:43.749121  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:43.749685  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:43.756132  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:43.787457  806109 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 14:21:43.787548  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:43.815805  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:44.114671  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:44.115253  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.548438  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.550543  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.046803  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.049237  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581293  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.153351  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:46.153798  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.640887  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.643861  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081245  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:47.568674  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.569175  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.056720  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.057131  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.585162  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.717857623s)
	I1006 14:21:48.585271  806109 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.697643759s)
	I1006 14:21:48.585318  806109 api_server.go:72] duration metric: took 14.445740723s to wait for apiserver process to appear ...
	I1006 14:21:48.585343  806109 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:21:48.585375  806109 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 14:21:48.585803  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.694205832s)
	I1006 14:21:48.585856  806109 addons.go:479] Verifying addon metrics-server=true in "addons-006450"
	I1006 14:21:48.585929  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.265223311s)
	I1006 14:21:48.586329  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.520142743s)
	W1006 14:21:48.586371  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 14:21:48.586391  806109 retry.go:31] will retry after 354.82385ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
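
The failure above is an ordering race, not a broken manifest: the combined apply creates the VolumeSnapshotClass CRDs and a VolumeSnapshotClass object in one pass, and the object cannot be resource-mapped until the CRDs are established, so the first attempt fails and a later retry succeeds. A minimal Go sketch of the retry-with-backoff pattern the retry.go lines show, assuming only the standard library and a kubectl binary on PATH (applyWithRetry and the backoff values are illustrative, not minikube's actual code):

	// A minimal sketch, not minikube's code: rerun `kubectl apply -f` with a
	// growing delay until the CRDs are established and the apply succeeds.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		delay := 300 * time.Millisecond // the first retry in the log waited ~355ms
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			time.Sleep(delay)
			delay *= 2 // grow the delay between attempts, as in the retry intervals logged here
		}
		return lastErr
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 6); err != nil {
			fmt.Println("giving up:", err)
		}
	}

An alternative that avoids the race entirely is to apply the CRD manifests first and block on `kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io` before creating objects of that kind.
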
	I1006 14:21:48.586570  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.385202699s)
	I1006 14:21:48.586585  806109 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:48.590422  806109 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006450 service yakd-dashboard -n yakd-dashboard
	
	I1006 14:21:48.592576  806109 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 14:21:48.597670  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 14:21:48.614206  806109 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 14:21:48.647358  806109 api_server.go:141] control plane version: v1.34.1
	I1006 14:21:48.647389  806109 api_server.go:131] duration metric: took 62.022744ms to wait for apiserver health ...
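
The healthz probe above is a plain HTTPS GET that counts as healthy once it returns 200 with body "ok". A minimal sketch of that poll, assuming the apiserver address from the log and skipping TLS verification for brevity (minikube itself trusts the cluster CA; both the address and the skip are illustrative):

	// A minimal sketch of the healthz poll; InsecureSkipVerify is for
	// brevity only, a real client should trust the cluster CA instead.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "ok"
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never reported healthy")
	}
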
	I1006 14:21:48.647399  806109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:21:48.648507  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.648899  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.690542  806109 system_pods.go:59] 19 kube-system pods found
	I1006 14:21:48.690881  806109 system_pods.go:61] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.690920  806109 system_pods.go:61] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.690960  806109 system_pods.go:61] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.690990  806109 system_pods.go:61] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.691016  806109 system_pods.go:61] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.691053  806109 system_pods.go:61] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.691073  806109 system_pods.go:61] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.691092  806109 system_pods.go:61] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.691138  806109 system_pods.go:61] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.691163  806109 system_pods.go:61] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.691184  806109 system_pods.go:61] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.691218  806109 system_pods.go:61] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.691244  806109 system_pods.go:61] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.691266  806109 system_pods.go:61] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.691302  806109 system_pods.go:61] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.691330  806109 system_pods.go:61] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.691354  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691391  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691417  806109 system_pods.go:61] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.691437  806109 system_pods.go:74] duration metric: took 44.032107ms to wait for pod list to return data ...
	I1006 14:21:48.691473  806109 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:21:48.690844  806109 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 14:21:48.691711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
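
The kapi.go lines here and below poll pods matched by a label selector until they leave Pending and become Ready. A rough stand-in for the same wait, assuming kubectl on PATH and shelling out to `kubectl wait` rather than polling the API directly as minikube does:

	// A rough stand-in for the kapi.go label-selector wait, assuming
	// kubectl on PATH; not how minikube implements it internally.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "wait",
			"--namespace", "kube-system",
			"--for=condition=Ready", "pod",
			"--selector", "kubernetes.io/minikube-addons=csi-hostpath-driver",
			"--timeout=6m").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("pods not ready:", err)
		}
	}
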
	I1006 14:21:48.780129  806109 default_sa.go:45] found service account: "default"
	I1006 14:21:48.780207  806109 default_sa.go:55] duration metric: took 88.709889ms for default service account to be created ...
	I1006 14:21:48.780231  806109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:21:48.888790  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.132593822s)
	W1006 14:21:48.888876  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:48.888908  806109 retry.go:31] will retry after 467.080472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
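
Unlike the VolumeSnapshotClass race above, this error is client-side validation: a document in ig-crd.yaml reaches kubectl with apiVersion and kind not set, so the identical apply fails the same way on every retry and the backoff loop cannot converge. A small sketch of a pre-flight check that would surface this before the retry loop starts, assuming kubectl on PATH (validateManifest is a hypothetical helper; `--dry-run=client` validates without touching the cluster):

	// A minimal pre-flight sketch: a manifest missing apiVersion/kind
	// fails here instead of inside the cluster-side retry loop.
	// validateManifest is a hypothetical helper, not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func validateManifest(path string) error {
		out, err := exec.Command("kubectl", "apply", "--dry-run=client", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s failed client-side validation: %v\n%s", path, err, out)
		}
		return nil
	}

	func main() {
		if err := validateManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Println(err)
		}
	}
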
	I1006 14:21:48.888970  806109 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.101487907s)
	I1006 14:21:48.892596  806109 system_pods.go:86] 19 kube-system pods found
	I1006 14:21:48.892682  806109 system_pods.go:89] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.892707  806109 system_pods.go:89] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.892729  806109 system_pods.go:89] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.892769  806109 system_pods.go:89] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.892792  806109 system_pods.go:89] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.892812  806109 system_pods.go:89] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.892844  806109 system_pods.go:89] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.892868  806109 system_pods.go:89] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.892892  806109 system_pods.go:89] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.892925  806109 system_pods.go:89] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.892962  806109 system_pods.go:89] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.892984  806109 system_pods.go:89] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.893021  806109 system_pods.go:89] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.893045  806109 system_pods.go:89] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.893080  806109 system_pods.go:89] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.893105  806109 system_pods.go:89] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.893126  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893161  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893183  806109 system_pods.go:89] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.893204  806109 system_pods.go:126] duration metric: took 112.954104ms to wait for k8s-apps to be running ...
	I1006 14:21:48.893238  806109 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:21:48.893331  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:21:48.893436  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:48.897290  806109 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 14:21:48.900672  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 14:21:48.900752  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 14:21:48.942085  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:48.960118  806109 system_svc.go:56] duration metric: took 66.871905ms WaitForService to wait for kubelet
	I1006 14:21:48.960199  806109 kubeadm.go:586] duration metric: took 14.820620987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:21:48.960231  806109 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:21:48.965554  806109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:21:48.965640  806109 node_conditions.go:123] node cpu capacity is 2
	I1006 14:21:48.965667  806109 node_conditions.go:105] duration metric: took 5.41607ms to run NodePressure ...
	I1006 14:21:48.965693  806109 start.go:241] waiting for startup goroutines ...
	I1006 14:21:48.984429  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 14:21:48.984493  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 14:21:49.062891  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.063409  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.102274  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:49.109468  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.109495  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 14:21:49.163209  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.357126  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:49.543241  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.545480  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.602876  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.041860  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.044347  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.102201  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.541424  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.543788  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.625651  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.006456  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.064277984s)
	I1006 14:21:51.006543  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.84331281s)
	I1006 14:21:51.010142  806109 addons.go:479] Verifying addon gcp-auth=true in "addons-006450"
	I1006 14:21:51.025044  806109 out.go:179] * Verifying gcp-auth addon...
	I1006 14:21:51.032841  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 14:21:51.036529  806109 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 14:21:51.036555  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.042265  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.044526  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.102619  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.536647  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.544904  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.545440  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.602200  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.864284  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.507114739s)
	W1006 14:21:51.864377  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.864433  806109 retry.go:31] will retry after 615.286821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.037094  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.041054  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.043625  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.101572  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:52.479941  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:52.536478  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.541425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.543774  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.600990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.035872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.041098  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.043636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.101845  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.536239  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.536598  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.05658149s)
	W1006 14:21:53.536657  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.536695  806109 retry.go:31] will retry after 1.187113289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.541601  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.543552  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.602095  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.037487  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.042200  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.045343  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.102498  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.537542  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.542167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.544351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.602290  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.724667  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:55.036372  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.043120  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.044769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.101792  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.536221  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.541111  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.543457  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.601561  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.840769  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116063398s)
	W1006 14:21:55.840813  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:55.840833  806109 retry.go:31] will retry after 947.610718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.036387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.043063  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.044685  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.101635  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.536456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.541501  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.543585  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.601983  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.789245  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:57.036659  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.042057  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.044676  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.102243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.537164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.543103  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.544004  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.601850  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.839191  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.049904578s)
	W1006 14:21:57.839238  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:57.839258  806109 retry.go:31] will retry after 1.03292313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.037616  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.041961  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.044496  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.107912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.536745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.540665  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.544634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.601133  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.872574  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:59.036224  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.041408  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.044098  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.101370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.536626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.542541  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.543654  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.601836  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.922791  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.050177986s)
	W1006 14:21:59.922823  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.922842  806109 retry.go:31] will retry after 2.488598562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.043764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.064604  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.065064  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.129394  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:00.537107  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.541010  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.543818  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.628309  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.036861  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.043610  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.046494  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.102249  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.537399  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.541534  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.543844  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.601153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.038594  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.041768  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.044895  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.102517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.411855  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:02.535770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.540865  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.544524  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.601881  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.036514  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.041497  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.043732  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.101053  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.551361  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.551723  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.552096  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.607741  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.821574  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.409680153s)
	W1006 14:22:03.821607  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:03.821626  806109 retry.go:31] will retry after 2.808613429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:04.036608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.042059  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.044591  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.102238  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:04.537121  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.541031  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.544043  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.638355  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.045826  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.045915  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.046027  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.103126  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.536935  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.541096  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.543811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.601370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.037342  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.048770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.049575  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.102090  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.537158  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.541167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.544718  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.601939  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.631301  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:07.036903  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.041275  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.046171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.101990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:07.537306  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.542954  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.548030  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.602151  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.038923  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.045713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.048165  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.138614  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.453750  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.822414187s)
	W1006 14:22:08.453835  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.453869  806109 retry.go:31] will retry after 8.425837281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.536134  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.541309  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.543203  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.601173  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.037059  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.041277  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.043958  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.106411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.536191  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.540957  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.543212  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.637335  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.038746  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.041203  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.043968  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.101414  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.535919  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.541593  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.544180  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.601144  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.036181  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.041258  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.043931  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.102062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.536161  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.541576  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.545106  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.601994  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.037286  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.041743  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.043857  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.101936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.536252  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.542977  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.544737  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.602418  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.037636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.043353  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.045541  806109 kapi.go:107] duration metric: took 29.504656348s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 14:22:13.103856  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.536010  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.541542  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.602453  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.041118  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.101847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.535955  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.540895  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.601210  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.038047  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.042436  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.101780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.536551  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.541754  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.601384  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.036266  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.041349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.101883  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.535728  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.540993  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.601091  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.880118  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:17.036213  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.041368  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.102032  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:17.536149  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.541821  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.606226  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.037103  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.041146  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.102447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.125066  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.244891148s)
	W1006 14:22:18.125106  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:18.125137  806109 retry.go:31] will retry after 8.394227584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:18.536459  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.541489  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.602140  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.036341  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.041843  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.101573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.536129  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.541594  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.036705  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.040761  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.101466  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.536346  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.541417  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.602109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.037009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.042008  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.103192  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.536872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.545192  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.036447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.041450  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.101387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.537530  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.547087  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.602381  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.038711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.047024  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.102246  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.537465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.542053  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.602575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.037716  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.041932  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.105425  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.537009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.540996  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.601164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.037218  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.041462  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.101898  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.541274  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.541617  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.601533  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.037202  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.041027  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.101243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.520530  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:26.537318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.541434  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.602288  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.040735  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.101318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.536660  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.540656  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.601312  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.622677  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.102107139s)
	W1006 14:22:27.622764  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.622799  806109 retry.go:31] will retry after 8.964562377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.036352  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.041655  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.101317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:28.536873  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.542495  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.601848  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.037235  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.041321  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.101529  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.536608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.541988  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.601332  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.067966  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.069628  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.102287  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.537456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.541607  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.605527  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.047144  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.047366  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.102811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.540586  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.543600  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.601318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.041560  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.101712  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.537074  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.541459  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.637575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.037645  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.041762  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.101769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.537080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.546252  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.602460  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.049083  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.059194  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.102644  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.536345  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.541231  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.602566  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.036474  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.041683  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.101153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.536516  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.543131  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.601301  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.040029  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.041789  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.101554  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.536713  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.541523  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.587821  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:36.637573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.036522  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.042208  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.101356  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.538450  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.541912  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.601423  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.039073  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.041963  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.107975  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.260560  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.672700487s)
	W1006 14:22:38.260650  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:38.260684  806109 retry.go:31] will retry after 28.502029632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:38.537841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.541302  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.634080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.042819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.044710  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.101819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.536317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.541291  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.602171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.063837  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.065152  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.160263  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.536517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.541760  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.601589  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.035811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.040992  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.101764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.537386  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.541696  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.638626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.041509  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.042425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.102420  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.536866  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.540382  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.602008  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.036485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.041855  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.104569  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.537538  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.541564  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.603912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.036751  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.041644  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.100816  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.535598  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.540901  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.605465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.067085  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.085831  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.104001  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.535733  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.541994  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.601937  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.037039  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.042662  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.100769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.538350  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.542984  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.601745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.036231  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.041572  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.101597  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.537411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.541447  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.601925  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.036062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.046387  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.106511  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.535973  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.541411  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.602406  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.082967  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.083089  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.101404  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.543349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.543936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.606022  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:50.052841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.053282  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:50.101918  806109 kapi.go:107] duration metric: took 1m1.504246684s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 14:22:50.536780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.540713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.039833  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.041873  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.536470  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.541280  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.036677  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.041641  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.536085  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.540908  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.036694  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.041925  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.536756  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.541339  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.036706  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.041617  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.536485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.541468  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:55.054778  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:55.076569  806109 kapi.go:107] duration metric: took 1m11.538807076s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 14:22:55.536329  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.036624  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.535976  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.036354  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.536109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.037892  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.536442  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.536233  806109 kapi.go:107] duration metric: took 1m8.503389262s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 14:22:59.539324  806109 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-006450 cluster.
	I1006 14:22:59.542088  806109 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 14:22:59.544863  806109 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
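The three gcp-auth messages above describe an opt-out mechanism: the mutating webhook skips any pod labeled with the `gcp-auth-skip-secret` key. As a minimal sketch (the pod name and image are placeholders, not taken from this run), a pod that opts out of credential mounting carries the label in its metadata:

    # Hypothetical pod spec; only the label key is meaningful to gcp-auth.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # placeholder name
      labels:
        gcp-auth-skip-secret: "true"  # tells the gcp-auth webhook to skip this pod
    spec:
      containers:
      - name: app
        image: nginx:alpine           # placeholder image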
	I1006 14:23:06.763823  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:07.625986  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:07.626019  806109 retry.go:31] will retry after 17.722294339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:25.349291  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:26.187865  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:26.187971  806109 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
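The retry loop above never converges because the failure is deterministic: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml, reporting that a document in the file lacks the two mandatory header fields, apiVersion and kind. The suggested --validate=false flag would merely suppress the check; the manifest itself needs the fields. A minimal sketch of a well-formed CRD header follows (the group and resource names are illustrative placeholders, not the actual inspektor-gadget CRD):

    # Hypothetical CRD skeleton showing the two fields kubectl reported
    # as "not set"; every YAML document applied must carry both.
    apiVersion: apiextensions.k8s.io/v1   # required header field
    kind: CustomResourceDefinition        # required header field
    metadata:
      name: traces.example.io             # placeholder
    spec:
      group: example.io
      names:
        kind: Trace
        plural: traces
        singular: trace
      scope: Namespaced
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true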
	I1006 14:23:26.191145  806109 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1006 14:23:26.193747  806109 addons.go:514] duration metric: took 1m52.052915825s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher volcano metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1006 14:23:26.193810  806109 start.go:246] waiting for cluster config update ...
	I1006 14:23:26.193839  806109 start.go:255] writing updated cluster config ...
	I1006 14:23:26.194174  806109 ssh_runner.go:195] Run: rm -f paused
	I1006 14:23:26.198700  806109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:26.203281  806109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.213859  806109 pod_ready.go:94] pod "coredns-66bc5c9577-5b26c" is "Ready"
	I1006 14:23:26.213893  806109 pod_ready.go:86] duration metric: took 10.577014ms for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.216571  806109 pod_ready.go:83] waiting for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.223509  806109 pod_ready.go:94] pod "etcd-addons-006450" is "Ready"
	I1006 14:23:26.223539  806109 pod_ready.go:86] duration metric: took 6.938313ms for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.226276  806109 pod_ready.go:83] waiting for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.230877  806109 pod_ready.go:94] pod "kube-apiserver-addons-006450" is "Ready"
	I1006 14:23:26.230912  806109 pod_ready.go:86] duration metric: took 4.607653ms for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.233246  806109 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.603009  806109 pod_ready.go:94] pod "kube-controller-manager-addons-006450" is "Ready"
	I1006 14:23:26.603041  806109 pod_ready.go:86] duration metric: took 369.767385ms for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.803580  806109 pod_ready.go:83] waiting for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.202844  806109 pod_ready.go:94] pod "kube-proxy-rr8rw" is "Ready"
	I1006 14:23:27.202872  806109 pod_ready.go:86] duration metric: took 399.265658ms for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.402987  806109 pod_ready.go:83] waiting for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803050  806109 pod_ready.go:94] pod "kube-scheduler-addons-006450" is "Ready"
	I1006 14:23:27.803077  806109 pod_ready.go:86] duration metric: took 400.059334ms for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803090  806109 pod_ready.go:40] duration metric: took 1.604355795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:27.868687  806109 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 14:23:27.871326  806109 out.go:179] * Done! kubectl is now configured to use "addons-006450" cluster and "default" namespace by default
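The "minor skew: 1" note above is informational rather than an error: kubectl is supported within one minor version of the API server, so a 1.33 client against a 1.34 cluster is within the documented skew policy. The pairing can be confirmed with:

    # Prints both the client and the server version for the active context.
    kubectl version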
	
	
	==> Docker <==
	Oct 06 14:32:45 addons-006450 dockerd[1123]: time="2025-10-06T14:32:45.533156300Z" level=info msg="ignoring event" container=bc87101a8ed619eb9848c4561b086171c5af4bb30475ca39ef6743a2f163c5d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:47 addons-006450 dockerd[1123]: time="2025-10-06T14:32:47.656014922Z" level=info msg="ignoring event" container=1f56749caece00c0ea43fbde19476bacbdddba2686294e230c2e70aa07e16348 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:48 addons-006450 dockerd[1123]: time="2025-10-06T14:32:48.822789536Z" level=info msg="ignoring event" container=75c19a63dce9ba70894b545dc95092a9f612e208826047ffeddfa8b9d387cf44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:48 addons-006450 dockerd[1123]: time="2025-10-06T14:32:48.865326960Z" level=info msg="ignoring event" container=385b41735590f345133a5461d3d7985844fe448ff905bc09574be7b8e94ea61b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:49 addons-006450 dockerd[1123]: time="2025-10-06T14:32:49.172712723Z" level=info msg="ignoring event" container=19e88fa72d39f97f76b5c7b5c1ad73e27fcaae060d488ec3af8bcb52fcf0d3fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:49 addons-006450 dockerd[1123]: time="2025-10-06T14:32:49.209400418Z" level=info msg="ignoring event" container=8b6806e8a2031953fc97410816cae42b626b498a39197e07cac791b4861fa14d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:56 addons-006450 dockerd[1123]: time="2025-10-06T14:32:56.692635665Z" level=info msg="ignoring event" container=be40b7b342bcc18e335144a5b5d19e29b26ec3632ddef38f82c9a2a537fe05dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:32:56 addons-006450 dockerd[1123]: time="2025-10-06T14:32:56.817407077Z" level=info msg="ignoring event" container=9be9dc44a48eeb37979340d0ce5ac05d321c9293d70b4de8bb22c4370bcf88ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:33:03 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:33:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/67a174e8ac6b95c90ea733f5a56b2b8e900f20e80e5c535b9c0b67658f895818/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:33:03 addons-006450 dockerd[1123]: time="2025-10-06T14:33:03.409368693Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:33:09 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:33:09Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5800b93b27c475015dc07d683ae173b707406685cd46f248c925a7233d92d217/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:33:10 addons-006450 dockerd[1123]: time="2025-10-06T14:33:10.017287133Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:33:16 addons-006450 dockerd[1123]: time="2025-10-06T14:33:16.980725431Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:33:20 addons-006450 dockerd[1123]: time="2025-10-06T14:33:20.981239801Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:33:43 addons-006450 dockerd[1123]: time="2025-10-06T14:33:43.104243299Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:33:43 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:33:43Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 06 14:33:49 addons-006450 dockerd[1123]: time="2025-10-06T14:33:49.992377195Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:34:38 addons-006450 dockerd[1123]: time="2025-10-06T14:34:38.010158992Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:34:40 addons-006450 dockerd[1123]: time="2025-10-06T14:34:40.980041827Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:36:09 addons-006450 dockerd[1123]: time="2025-10-06T14:36:09.178767140Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:36:09 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:36:09Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 06 14:36:12 addons-006450 dockerd[1123]: time="2025-10-06T14:36:12.993913483Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:38:54 addons-006450 dockerd[1123]: time="2025-10-06T14:38:54.060476000Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:38:54 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:38:54Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 06 14:39:04 addons-006450 dockerd[1123]: time="2025-10-06T14:39:04.963899618Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
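
Every "toomanyrequests" line above is Docker Hub's unauthenticated pull rate limit, and it is what keeps docker.io/nginx and docker.io/nginx:alpine from ever starting in the failed tests. Two possible mitigations, both using stock minikube commands (the profile name is taken from this run):

    # Side-load the images from the host so the node never pulls from Docker Hub:
    minikube -p addons-006450 image load nginx:alpine
    minikube -p addons-006450 image load nginx

    # Or store registry credentials in-cluster via the registry-creds addon (interactive prompts):
    minikube -p addons-006450 addons configure registry-creds

Note that minikube image load still pulls on the host if the image is absent there, so the host itself needs Hub access or a local copy.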
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	ffe6a9017df48       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   311174277f416       busybox                                     default
	f2a47081481dc       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             16 minutes ago      Running             controller                               0                   bd5557adaf3c6       ingress-nginx-controller-675c5ddd98-k4m4k   ingress-nginx
	e400809cac569       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          16 minutes ago      Running             csi-snapshotter                          0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	9c0d6f72f1f92       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          16 minutes ago      Running             csi-provisioner                          0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	4876f3a9c229a       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            16 minutes ago      Running             liveness-probe                           0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	7f7bdac7cf59b       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           16 minutes ago      Running             hostpath                                 0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	4eeec494d7d9f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                16 minutes ago      Running             node-driver-registrar                    0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	6f25a4d6caf64       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   16 minutes ago      Running             csi-external-health-monitor-controller   0                   70205a118c52f       csi-hostpathplugin-jdxpx                    kube-system
	0201aae6c64e0       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              16 minutes ago      Running             csi-resizer                              0                   0ca9bd27ecd5a       csi-hostpath-resizer-0                      kube-system
	9e27aa581454c       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             16 minutes ago      Running             csi-attacher                             0                   1a820fa8b56fd       csi-hostpath-attacher-0                     kube-system
	05cbc48ffb51e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      16 minutes ago      Running             volume-snapshot-controller               0                   4dcf8198ace65       snapshot-controller-7d9fbc56b8-6bdv2        kube-system
	2e1fd961dc8a7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      16 minutes ago      Running             volume-snapshot-controller               0                   47610a948360b       snapshot-controller-7d9fbc56b8-8stqh        kube-system
	02001c5bf8ca9       9a80c0c8eb61c                                                                                                                                16 minutes ago      Exited              patch                                    1                   67b15011fa29d       ingress-nginx-admission-patch-s6s8k         ingress-nginx
	11587ae8b0259       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   16 minutes ago      Exited              create                                   0                   6d0aa0c7acb77       ingress-nginx-admission-create-t2tnf        ingress-nginx
	509e7623ba228       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5                            16 minutes ago      Running             gadget                                   0                   14032f9fa6ab7       gadget-mwfpm                                gadget
	d3025a0e45236       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        16 minutes ago      Running             yakd                                     0                   510624cc4af1e       yakd-dashboard-5ff678cb9-nfj9q              yakd-dashboard
	0244185030bd7       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                                         16 minutes ago      Running             minikube-ingress-dns                     0                   5fb11b5433718       kube-ingress-dns-minikube                   kube-system
	7c848b41913dc       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       17 minutes ago      Running             local-path-provisioner                   0                   dd0d4f86343b0       local-path-provisioner-648f6765c9-fmrx9     local-path-storage
	8cf5351cc4642       gcr.io/cloud-spanner-emulator/emulator@sha256:15030dbba87c4fba50265cc80e62278eb41925d24d3a54c30563eff06304bf58                               17 minutes ago      Running             cloud-spanner-emulator                   0                   4916510c10c2b       cloud-spanner-emulator-85f6b7fc65-zjsh8     default
	aa8b68706bef2       nvcr.io/nvidia/k8s-device-plugin@sha256:3c54348fe5a57e5700e7d8068e7531d2ef2d5f3ccb70c8f6bac0953432527abd                                     17 minutes ago      Running             nvidia-device-plugin-ctr                 0                   48071d8f52e3b       nvidia-device-plugin-daemonset-d29s2        kube-system
	59bd3def26ae0       ba04bb24b9575                                                                                                                                17 minutes ago      Running             storage-provisioner                      0                   a23e97739eb30       storage-provisioner                         kube-system
	1f08a0b17053c       138784d87c9c5                                                                                                                                17 minutes ago      Running             coredns                                  0                   41c06ea8e8dab       coredns-66bc5c9577-5b26c                    kube-system
	2c89530d2d498       05baa95f5142d                                                                                                                                17 minutes ago      Running             kube-proxy                               0                   3401ff6190b48       kube-proxy-rr8rw                            kube-system
	9184b772f37f1       7eb2c6ff0c5a7                                                                                                                                17 minutes ago      Running             kube-controller-manager                  0                   431c21e60ec20       kube-controller-manager-addons-006450       kube-system
	16d61d5012e7c       b5f57ec6b9867                                                                                                                                17 minutes ago      Running             kube-scheduler                           0                   a52e4c8396f58       kube-scheduler-addons-006450                kube-system
	e5031a852e78a       43911e833d64d                                                                                                                                17 minutes ago      Running             kube-apiserver                           0                   dc93b2d9f3eda       kube-apiserver-addons-006450                kube-system
	57ec1a2227a7f       a1894772a478e                                                                                                                                17 minutes ago      Running             etcd                                     0                   31b1c12560e88       etcd-addons-006450                          kube-system
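
This listing is the CRI-level view of the node. It can be reproduced inside the minikube node, assuming the image's stock crictl config points at the cri-dockerd socket (as it does in standard minikube images):

    minikube -p addons-006450 ssh -- sudo crictl ps -a
    # or the raw Docker view, since this profile uses the docker runtime:
    minikube -p addons-006450 ssh -- docker ps -a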
	
	
	==> controller_ingress [f2a47081481d] <==
	I1006 14:22:56.378131       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1006 14:22:56.378531       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1006 14:22:56.385446       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1006 14:22:56.386528       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-k4m4k"
	I1006 14:22:56.394070       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-k4m4k" node="addons-006450"
	I1006 14:22:56.402954       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-k4m4k" node="addons-006450"
	I1006 14:22:56.426765       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:22:56.426836       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1006 14:22:56.427067       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1006 14:33:02.321357       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1006 14:33:02.322712       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1006 14:33:02.330136       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W1006 14:33:02.330482       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1006 14:33:02.331181       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1006 14:33:02.331427       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"422cb3a2-2f49-4a83-8c3d-5e3e2b23e211", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2696", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I1006 14:33:02.410927       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:33:02.411811       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1006 14:33:05.664002       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1006 14:33:05.664777       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1006 14:33:05.705445       7 controller.go:228] "Backend successfully reloaded"
	I1006 14:33:05.706043       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-k4m4k", UID:"7af3dc46-9579-4103-920a-676be59d642a", APIVersion:"v1", ResourceVersion:"1329", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1006 14:33:08.997892       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1006 14:33:56.393716       7 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	W1006 14:33:56.398162       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1006 14:33:56.398258       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"422cb3a2-2f49-4a83-8c3d-5e3e2b23e211", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2835", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
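
The repeated 'Service "default/nginx" does not have any active Endpoint' warnings are a downstream symptom: the Ingress object synced fine, but the backing nginx pod never became Ready (see the image-pull failures above), so its Endpoints stayed empty. A quick confirmation, assuming the same context:

    kubectl --context addons-006450 -n default get endpoints nginx
    kubectl --context addons-006450 -n default get pod nginx -o wide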
	
	
	==> coredns [1f08a0b17053] <==
	[INFO] 10.244.0.7:56542 - 33336 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002392707s
	[INFO] 10.244.0.7:56542 - 54232 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000178786s
	[INFO] 10.244.0.7:56542 - 7333 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136449s
	[INFO] 10.244.0.7:33056 - 46078 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000279019s
	[INFO] 10.244.0.7:33056 - 46299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298719s
	[INFO] 10.244.0.7:56424 - 24690 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002456s
	[INFO] 10.244.0.7:56424 - 24468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000268837s
	[INFO] 10.244.0.7:59046 - 6419 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000205798s
	[INFO] 10.244.0.7:59046 - 6231 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164198s
	[INFO] 10.244.0.7:57987 - 61663 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001803179s
	[INFO] 10.244.0.7:57987 - 61843 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002072492s
	[INFO] 10.244.0.7:52614 - 11017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243541s
	[INFO] 10.244.0.7:52614 - 10853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192292s
	[INFO] 10.244.0.26:44951 - 63731 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272135s
	[INFO] 10.244.0.26:43415 - 16328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118021s
	[INFO] 10.244.0.26:39889 - 25486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139116s
	[INFO] 10.244.0.26:39105 - 18081 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154197s
	[INFO] 10.244.0.26:56273 - 11862 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000274474s
	[INFO] 10.244.0.26:44777 - 21446 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000313833s
	[INFO] 10.244.0.26:47488 - 37580 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00207181s
	[INFO] 10.244.0.26:50437 - 7597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001591703s
	[INFO] 10.244.0.26:49063 - 42612 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001943943s
	[INFO] 10.244.0.26:39378 - 64309 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00241089s
	[INFO] 10.244.0.30:44861 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00027604s
	[INFO] 10.244.0.30:48981 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134175s
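
The NXDOMAIN bursts above are the expected effect of the pod resolv.conf written earlier (cluster search domains plus ndots:5): even the full-looking name registry.kube-system.svc.cluster.local contains only four dots, below the ndots threshold, so the resolver walks every search suffix, including us-east-2.compute.internal, before trying the name as-is. Appending a trailing dot makes the name fully qualified and skips the expansion; a sketch using an image already cached on this node:

    kubectl --context addons-006450 run dnstest --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      nslookup registry.kube-system.svc.cluster.local.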
	
	
	==> describe nodes <==
	Name:               addons-006450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-006450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006450
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-006450"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:21:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:38:28 +0000   Mon, 06 Oct 2025 14:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0364ef7d33ec438ea80b3763bd3b6ccc
	  System UUID:                35426571-e524-4094-b847-4e5d39cdb9e6
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  default                     cloud-spanner-emulator-85f6b7fc65-zjsh8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-mwfpm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-k4m4k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-5b26c                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-jdxpx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-006450                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kube-apiserver-addons-006450                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-006450        200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-rr8rw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-006450                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 nvidia-device-plugin-daemonset-d29s2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-creds-764b6fb674-gxwfl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-6bdv2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-8stqh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-648f6765c9-fmrx9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-nfj9q               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   NodeReady                17m                kubelet          Node addons-006450 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node addons-006450 event: Registered Node addons-006450 in Controller
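
The percentages under Allocated resources are integer-truncated shares of the Allocatable block above: 850m CPU of 2000m allocatable is 42.5% (shown as 42%), and 388Mi of 8022304Ki, roughly 7834Mi, is about 4.95% (shown as 4%). The whole table comes from a single command:

    kubectl --context addons-006450 describe node addons-006450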
	
	
	==> dmesg <==
	[Oct 6 12:53] systemd-journald[226]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57ec1a2227a7] <==
	{"level":"warn","ts":"2025-10-06T14:21:25.170769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.187747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.208847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:25.304509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.763248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.777779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.281548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.337199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.387982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.452451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.481768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.595747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.614909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.631591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.664368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.680487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.697752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.764439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.772435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:31:23.319583Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1712}
	{"level":"info","ts":"2025-10-06T14:31:23.387482Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1712,"took":"67.368638ms","hash":2638762742,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4431872,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2025-10-06T14:31:23.387544Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2638762742,"revision":1712,"compact-revision":-1}
	{"level":"info","ts":"2025-10-06T14:36:23.326234Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2212}
	{"level":"info","ts":"2025-10-06T14:36:23.346470Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2212,"took":"19.456428ms","hash":564227051,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":5521408,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-06T14:36:23.346528Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":564227051,"revision":2212,"compact-revision":1712}
	
	
	==> kernel <==
	 14:39:11 up 21:21,  0 user,  load average: 0.16, 0.43, 1.49
	Linux addons-006450 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e5031a852e78] <==
	I1006 14:32:01.297393       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I1006 14:32:01.765062       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1006 14:32:01.991228       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1006 14:32:02.040207       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1006 14:32:02.089227       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1006 14:32:02.089274       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1006 14:32:02.109502       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1006 14:32:02.362499       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1006 14:32:02.531192       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1006 14:32:02.742209       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1006 14:32:02.772048       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1006 14:32:02.823303       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1006 14:32:03.078900       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	I1006 14:32:03.083740       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1006 14:32:03.182326       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1006 14:32:03.182490       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1006 14:32:03.230092       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1006 14:32:04.084579       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1006 14:32:04.275777       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1006 14:32:21.681568       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33172: use of closed network connection
	E1006 14:32:22.105912       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33216: use of closed network connection
	I1006 14:32:31.969654       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.103.92"}
	I1006 14:33:02.323607       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 14:33:02.632504       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.99.116"}
	I1006 14:33:18.931472       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
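
The burst at 14:32, where *.volcano.sh GroupVersions are added and every matching cacher then logs "Terminating all watchers", is consistent with the Volcano addon's CRDs being torn down after the failed serial/Volcano test rather than with an apiserver fault. Whether any of them survived can be checked with:

    kubectl --context addons-006450 get crd | grep volcano.sh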
	
	
	==> kube-controller-manager [9184b772f37f] <==
	E1006 14:38:13.217451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:14.790834       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:14.791947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:16.550016       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:16.551183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:22.000699       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:22.001986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:23.693482       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:23.694947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:36.970333       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:36.971410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:39.236875       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:39.237939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:49.395869       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:49.397212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:52.438978       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:52.440331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:38:53.740217       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:38:53.741693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:39:02.025578       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:39:02.026711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:39:02.391627       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:39:02.392776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:39:09.157099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:39:09.158396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
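
These repeating PartialObjectMetadata watch failures come from the controller manager's metadata informers (used by garbage collection and quota), and "the server could not find the requested resource" most likely points at the Volcano CRDs deleted at 14:32: the informers keep retrying resource types that no longer exist until they resync. A quick way to see whether one of those dynamic API groups is still served:

    kubectl --context addons-006450 api-resources --api-group=batch.volcano.sh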
	
	
	==> kube-proxy [2c89530d2d49] <==
	I1006 14:21:35.738189       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:21:35.837556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:21:35.938392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:21:35.938475       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:21:35.938596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:21:36.026114       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:21:36.026170       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:21:36.061180       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:21:36.061523       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:21:36.061547       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:36.062743       1 config.go:200] "Starting service config controller"
	I1006 14:21:36.062767       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:21:36.063897       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:21:36.063910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:21:36.063943       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:21:36.063947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:21:36.064746       1 config.go:309] "Starting node config controller"
	I1006 14:21:36.064764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:21:36.064771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:21:36.163641       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:21:36.164636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:21:36.164662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
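
The only non-informational line here is the startup hint that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. If NodePort traffic should be restricted to the primary node IP, as the message suggests, one way is to set the field in the kubeadm-managed kube-proxy ConfigMap (key config.conf) and restart the daemonset; treat the exact field spelling below as an assumption to verify against this kube-proxy version:

    kubectl --context addons-006450 -n kube-system edit configmap kube-proxy
    #   in config.conf, set:  nodePortAddresses: ["primary"]
    kubectl --context addons-006450 -n kube-system rollout restart daemonset kube-proxy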
	
	
	==> kube-scheduler [16d61d5012e7] <==
	I1006 14:21:27.016131       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:27.020700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.020968       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.021894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:21:27.023886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 14:21:27.029893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 14:21:27.030068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 14:21:27.038289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 14:21:27.038473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 14:21:27.038518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 14:21:27.038557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 14:21:27.040442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 14:21:27.040803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 14:21:27.040860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 14:21:27.040908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 14:21:27.040970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 14:21:27.041025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 14:21:27.041090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 14:21:27.041145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 14:21:27.041189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 14:21:27.041328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 14:21:27.041374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 14:21:27.041451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 14:21:27.041497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1006 14:21:28.621743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
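The "Failed to watch ... is forbidden" reflector errors above are all timestamped one second before the "Caches are synced" line, which points to transient startup noise while RBAC was still initializing rather than a real authorization gap. If such errors persisted, one way to probe the scheduler's effective permissions (an illustrative check, assuming the caller is allowed to impersonate system users) would be:

	kubectl --context addons-006450 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler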
	
	
	==> kubelet <==
	Oct 06 14:37:46 addons-006450 kubelet[2258]: E1006 14:37:46.754866    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:37:47 addons-006450 kubelet[2258]: E1006 14:37:47.752512    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:38:00 addons-006450 kubelet[2258]: E1006 14:38:00.201983    2258 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 06 14:38:00 addons-006450 kubelet[2258]: E1006 14:38:00.202079    2258 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8521a0d-ed5a-452c-9fe0-94e6798668f2-gcr-creds podName:a8521a0d-ed5a-452c-9fe0-94e6798668f2 nodeName:}" failed. No retries permitted until 2025-10-06 14:40:02.202058632 +0000 UTC m=+1113.591266174 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a8521a0d-ed5a-452c-9fe0-94e6798668f2-gcr-creds") pod "registry-creds-764b6fb674-gxwfl" (UID: "a8521a0d-ed5a-452c-9fe0-94e6798668f2") : secret "registry-creds-gcr" not found
	Oct 06 14:38:01 addons-006450 kubelet[2258]: E1006 14:38:01.752697    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:38:01 addons-006450 kubelet[2258]: E1006 14:38:01.755884    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:38:14 addons-006450 kubelet[2258]: E1006 14:38:14.758026    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:38:16 addons-006450 kubelet[2258]: E1006 14:38:16.752268    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:38:26 addons-006450 kubelet[2258]: E1006 14:38:26.766745    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:38:28 addons-006450 kubelet[2258]: I1006 14:38:28.752496    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-zjsh8" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:38:29 addons-006450 kubelet[2258]: E1006 14:38:29.752541    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:38:39 addons-006450 kubelet[2258]: I1006 14:38:39.752158    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-d29s2" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:38:40 addons-006450 kubelet[2258]: E1006 14:38:40.754604    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:38:41 addons-006450 kubelet[2258]: E1006 14:38:41.752387    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:38:51 addons-006450 kubelet[2258]: I1006 14:38:51.752377    2258 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 06 14:38:52 addons-006450 kubelet[2258]: E1006 14:38:52.752785    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:38:54 addons-006450 kubelet[2258]: E1006 14:38:54.064893    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 06 14:38:54 addons-006450 kubelet[2258]: E1006 14:38:54.064944    2258 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 06 14:38:54 addons-006450 kubelet[2258]: E1006 14:38:54.065026    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(effafea4-bd61-4243-a42c-72930366d494): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:38:54 addons-006450 kubelet[2258]: E1006 14:38:54.065058    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:39:04 addons-006450 kubelet[2258]: E1006 14:39:04.968997    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 06 14:39:04 addons-006450 kubelet[2258]: E1006 14:39:04.969057    2258 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 06 14:39:04 addons-006450 kubelet[2258]: E1006 14:39:04.969133    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(6e933703-adb9-4036-9530-9f2296a30c95): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:39:04 addons-006450 kubelet[2258]: E1006 14:39:04.969167    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:39:08 addons-006450 kubelet[2258]: E1006 14:39:08.762963    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
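Every failure in the kubelet log above traces back to Docker Hub's unauthenticated pull rate limit (toomanyrequests) rather than to the addons under test. One common mitigation when a runner hits this (sketched here; it is not part of the test flow) is to pull the image once on the host, with credentials or via a mirror, and side-load it into the minikube node so the kubelet never pulls from docker.io:

	docker pull docker.io/nginx:alpine
	minikube -p addons-006450 image load docker.io/nginx:alpine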
	
	
	==> storage-provisioner [59bd3def26ae] <==
	W1006 14:38:45.305088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:47.309101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:47.316169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:49.319225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:49.326418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:51.334462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:51.342002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:53.345271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:53.349734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:55.353672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:55.358276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:57.361459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:57.366231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:59.369510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:38:59.374402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:01.377525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:01.382769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:03.386934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:03.393741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:05.400387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:05.404767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:07.408344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:07.415305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:09.418835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:39:09.424853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
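The storage-provisioner warnings show that it still talks to the core/v1 Endpoints API (most likely for its leader-election lock), which the cluster reports as deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. The replacement objects already exist and can be inspected directly, e.g.:

	kubectl --context addons-006450 get endpointslices.discovery.k8s.io -A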
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
helpers_test.go:269: (dbg) Run:  kubectl --context addons-006450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-006450 describe pod nginx task-pv-pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-006450 describe pod nginx task-pv-pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl: exit status 1 (104.044262ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-006450/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:33:02 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jbnj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jbnj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m10s                 default-scheduler  Successfully assigned default/nginx to addons-006450
	  Warning  Failed     4m34s (x3 over 6m9s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m4s (x5 over 6m9s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m3s (x5 over 6m9s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m3s (x2 over 5m29s)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    58s (x22 over 6m8s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     58s (x22 over 6m8s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-006450/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:33:09 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zxjwt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-zxjwt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-006450
	  Normal   Pulling    3m (x5 over 6m3s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m (x5 over 6m2s)    kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m (x5 over 6m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x20 over 6m2s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    43s (x21 over 6m2s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t2tnf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s6s8k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-gxwfl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-006450 describe pod nginx task-pv-pod ingress-nginx-admission-create-t2tnf ingress-nginx-admission-patch-s6s8k registry-creds-764b6fb674-gxwfl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable volumesnapshots --alsologtostderr -v=1: (1.051540281s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.847815035s)
--- FAIL: TestAddons/parallel/CSI (391.43s)

TestAddons/parallel/LocalPath (345.52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-006450 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-006450 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-006450 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the identical polling command above is re-run roughly 300 more times over the 5m0s wait; verbatim duplicate lines elided ...]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-006450 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.271µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
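For local reproduction, the polling above amounts to waiting for the claim to reach the Bound phase before the test's deadline. A minimal client-go sketch of that wait loop (illustrative only; the test helper shells out to kubectl rather than using client-go, and the function and variable names here are invented for the example):

    // waitForPVCBound polls the PVC's status.phase until it is Bound or the timeout expires.
    package localpath

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPVCBound(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                return pvc.Status.Phase == corev1.ClaimBound, nil
            })
    }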
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-006450
helpers_test.go:243: (dbg) docker inspect addons-006450:
-- stdout --
	[
	    {
	        "Id": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	        "Created": "2025-10-06T14:21:00.2900908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:21:00.391293391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hostname",
	        "HostsPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/hosts",
	        "LogPath": "/var/lib/docker/containers/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90/fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90-json.log",
	        "Name": "/addons-006450",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-006450:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-006450",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fedf355814c061eb6f17b5180c0bd769ec2161ecb06d34c5875476d85fde2d90",
	                "LowerDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3432813f340fafa44e27ff11706c6b649870e2a0f77abd3f13e73434f0226eec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-006450",
	                "Source": "/var/lib/docker/volumes/addons-006450/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-006450",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-006450",
	                "name.minikube.sigs.k8s.io": "addons-006450",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09ddbf4aed5db91393a32b35522feed3626a6a03e08f6e0448ebb5aad5998ddd",
	            "SandboxKey": "/var/run/docker/netns/09ddbf4aed5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37506"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37507"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37510"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37508"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37509"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-006450": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:f4:99:c4:a9:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "165f6e38041442732f4da1d95818020ddb3d0bf16ac6242c03ef818c1b73d7fb",
	                    "EndpointID": "b2523cc159053c0b4c03cccafdf39f8b82bb8b5c7e911427f39eed28857482fc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-006450",
	                        "fedf355814c0"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
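Note in the inspect output that HostConfig.PortBindings requests ephemeral ports (empty HostPort) while NetworkSettings.Ports shows what Docker actually assigned, e.g. 22/tcp -> 127.0.0.1:37506, the SSH endpoint the provisioner dials later in this log. A small sketch for pulling that mapping out of `docker inspect` JSON (assumed to be piped on stdin; this is an illustration, not minikube code):

    // Reads `docker inspect <container>` JSON from stdin and prints the
    // host address bound to the container's 22/tcp.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type inspect struct {
        NetworkSettings struct {
            Ports map[string][]struct{ HostIp, HostPort string }
        }
    }

    func main() {
        var out []inspect
        if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil || len(out) == 0 {
            fmt.Fprintln(os.Stderr, "decode failed:", err)
            os.Exit(1)
        }
        for _, b := range out[0].NetworkSettings.Ports["22/tcp"] {
            fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:37506
        }
    }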
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-006450 -n addons-006450
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 logs -n 25: (1.100411214s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-023239                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-023239   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p download-docker-403886 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p download-docker-403886                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-403886 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ --download-only -p binary-mirror-859483 --alsologtostderr --binary-mirror http://127.0.0.1:42473 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ -p binary-mirror-859483                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-859483   │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ addons  │ enable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ start   │ -p addons-006450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:23 UTC │
	│ addons  │ addons-006450 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:31 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ enable headlamp -p addons-006450 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ ip      │ addons-006450 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:32 UTC │ 06 Oct 25 14:32 UTC │
	│ addons  │ addons-006450 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:33 UTC │ 06 Oct 25 14:33 UTC │
	│ addons  │ addons-006450 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ addons-006450 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-006450                                                                                                                                                                                                                                                                                                                                                                                             │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ addons-006450 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:39 UTC │ 06 Oct 25 14:39 UTC │
	│ addons  │ addons-006450 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	│ addons  │ addons-006450 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	│ addons  │ addons-006450 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	│ addons  │ addons-006450 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	│ addons  │ addons-006450 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-006450          │ jenkins │ v1.37.0 │ 06 Oct 25 14:41 UTC │ 06 Oct 25 14:41 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:20:33
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:20:33.934280  806109 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:20:33.934452  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934482  806109 out.go:374] Setting ErrFile to fd 2...
	I1006 14:20:33.934503  806109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:33.934791  806109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:20:33.935342  806109 out.go:368] Setting JSON to false
	I1006 14:20:33.936278  806109 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":75786,"bootTime":1759684648,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:20:33.936380  806109 start.go:140] virtualization:  
	I1006 14:20:33.939820  806109 out.go:179] * [addons-006450] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:20:33.942845  806109 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:20:33.942925  806109 notify.go:220] Checking for updates...
	I1006 14:20:33.949235  806109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:20:33.952125  806109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:20:33.955049  806109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:20:33.957833  806109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:20:33.960596  806109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:20:33.963595  806109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:20:33.986303  806109 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:20:33.986439  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.050609  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.04143491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.050721  806109 docker.go:318] overlay module found
	I1006 14:20:34.053842  806109 out.go:179] * Using the docker driver based on user configuration
	I1006 14:20:34.056712  806109 start.go:304] selected driver: docker
	I1006 14:20:34.056733  806109 start.go:924] validating driver "docker" against <nil>
	I1006 14:20:34.056748  806109 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:20:34.057477  806109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:34.111822  806109 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-06 14:20:34.102783115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:34.111982  806109 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:20:34.112211  806109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:20:34.115275  806109 out.go:179] * Using Docker driver with root privileges
	I1006 14:20:34.118173  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:20:34.118253  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:20:34.118263  806109 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 14:20:34.118342  806109 start.go:348] cluster config:
	{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:20:34.121483  806109 out.go:179] * Starting "addons-006450" primary control-plane node in "addons-006450" cluster
	I1006 14:20:34.124347  806109 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:20:34.127249  806109 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:20:34.130100  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:34.130168  806109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1006 14:20:34.130177  806109 cache.go:58] Caching tarball of preloaded images
	I1006 14:20:34.130222  806109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:20:34.130282  806109 preload.go:233] Found /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1006 14:20:34.130293  806109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1006 14:20:34.130624  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:20:34.130655  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json: {Name:mk78082a38967c23c9e0fec5499d829d2aa5600d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:20:34.149434  806109 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 14:20:34.149575  806109 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 14:20:34.149597  806109 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1006 14:20:34.149602  806109 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1006 14:20:34.149610  806109 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1006 14:20:34.149626  806109 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1006 14:20:52.383725  806109 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1006 14:20:52.383777  806109 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:20:52.383807  806109 start.go:360] acquireMachinesLock for addons-006450: {Name:mk6a488a7fef2004d8c41401b261288db1a55041 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:20:52.383940  806109 start.go:364] duration metric: took 111.276µs to acquireMachinesLock for "addons-006450"
	I1006 14:20:52.383972  806109 start.go:93] Provisioning new machine with config: &{Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:20:52.384058  806109 start.go:125] createHost starting for "" (driver="docker")
	I1006 14:20:52.387398  806109 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1006 14:20:52.387686  806109 start.go:159] libmachine.API.Create for "addons-006450" (driver="docker")
	I1006 14:20:52.387754  806109 client.go:168] LocalClient.Create starting
	I1006 14:20:52.387880  806109 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem
	I1006 14:20:52.755986  806109 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem
	I1006 14:20:54.000215  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1006 14:20:54.021843  806109 cli_runner.go:211] docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1006 14:20:54.021935  806109 network_create.go:284] running [docker network inspect addons-006450] to gather additional debugging logs...
	I1006 14:20:54.021951  806109 cli_runner.go:164] Run: docker network inspect addons-006450
	W1006 14:20:54.038245  806109 cli_runner.go:211] docker network inspect addons-006450 returned with exit code 1
	I1006 14:20:54.038287  806109 network_create.go:287] error running [docker network inspect addons-006450]: docker network inspect addons-006450: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-006450 not found
	I1006 14:20:54.038299  806109 network_create.go:289] output of [docker network inspect addons-006450]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-006450 not found
	
	** /stderr **
	I1006 14:20:54.038438  806109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:20:54.055471  806109 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d4c380}
	I1006 14:20:54.055517  806109 network_create.go:124] attempt to create docker network addons-006450 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1006 14:20:54.055572  806109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-006450 addons-006450
	I1006 14:20:54.110341  806109 network_create.go:108] docker network addons-006450 192.168.49.0/24 created
	I1006 14:20:54.110371  806109 kic.go:121] calculated static IP "192.168.49.2" for the "addons-006450" container
	I1006 14:20:54.110459  806109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1006 14:20:54.127884  806109 cli_runner.go:164] Run: docker volume create addons-006450 --label name.minikube.sigs.k8s.io=addons-006450 --label created_by.minikube.sigs.k8s.io=true
	I1006 14:20:54.148808  806109 oci.go:103] Successfully created a docker volume addons-006450
	I1006 14:20:54.148892  806109 cli_runner.go:164] Run: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1006 14:20:56.324467  806109 cli_runner.go:217] Completed: docker run --rm --name addons-006450-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --entrypoint /usr/bin/test -v addons-006450:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.175532295s)
	I1006 14:20:56.324511  806109 oci.go:107] Successfully prepared a docker volume addons-006450
	I1006 14:20:56.324545  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:20:56.324566  806109 kic.go:194] Starting extracting preloaded images to volume ...
	I1006 14:20:56.324627  806109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1006 14:21:00.168028  806109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-006450:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (3.843356071s)
	I1006 14:21:00.168062  806109 kic.go:203] duration metric: took 3.843492791s to extract preloaded images to volume ...
	W1006 14:21:00.168228  806109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1006 14:21:00.168353  806109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1006 14:21:00.269120  806109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-006450 --name addons-006450 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-006450 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-006450 --network addons-006450 --ip 192.168.49.2 --volume addons-006450:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1006 14:21:00.667135  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Running}}
	I1006 14:21:00.686913  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:00.708915  806109 cli_runner.go:164] Run: docker exec addons-006450 stat /var/lib/dpkg/alternatives/iptables
	I1006 14:21:00.766467  806109 oci.go:144] the created container "addons-006450" has a running status.
	I1006 14:21:00.766496  806109 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa...
	I1006 14:21:01.209222  806109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1006 14:21:01.244403  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.278442  806109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1006 14:21:01.278462  806109 kic_runner.go:114] Args: [docker exec --privileged addons-006450 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1006 14:21:01.342721  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:01.366223  806109 machine.go:93] provisionDockerMachine start ...
	I1006 14:21:01.366312  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.386115  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.388381  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.388404  806109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:21:01.583723  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.583748  806109 ubuntu.go:182] provisioning hostname "addons-006450"
	I1006 14:21:01.583829  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.604321  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.604631  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.604648  806109 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-006450 && echo "addons-006450" | sudo tee /etc/hostname
	I1006 14:21:01.762558  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-006450
	
	I1006 14:21:01.762702  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:01.783081  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:01.783379  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:01.783396  806109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-006450' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-006450/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-006450' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:21:01.932033  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
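The hostname provisioning above runs in two SSH steps: the hostname/tee command sets and persists the name, and the follow-up script rewrites the 127.0.1.1 alias in /etc/hosts so the new name resolves locally. A sketch of a manual check inside the node:

    # Sketch: confirm the provisioned hostname resolves locally
    hostname                       # expect: addons-006450
    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 addons-006450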
	I1006 14:21:01.932056  806109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-803497/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-803497/.minikube}
	I1006 14:21:01.932087  806109 ubuntu.go:190] setting up certificates
	I1006 14:21:01.932101  806109 provision.go:84] configureAuth start
	I1006 14:21:01.932162  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:01.953264  806109 provision.go:143] copyHostCerts
	I1006 14:21:01.953391  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem (1082 bytes)
	I1006 14:21:01.953509  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem (1123 bytes)
	I1006 14:21:01.953572  806109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem (1675 bytes)
	I1006 14:21:01.953642  806109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem org=jenkins.addons-006450 san=[127.0.0.1 192.168.49.2 addons-006450 localhost minikube]
	I1006 14:21:02.364998  806109 provision.go:177] copyRemoteCerts
	I1006 14:21:02.365098  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:21:02.365155  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.381521  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:02.475833  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:21:02.494054  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:21:02.512540  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1006 14:21:02.530771  806109 provision.go:87] duration metric: took 598.646522ms to configureAuth
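configureAuth generated a CA-signed server certificate whose SANs (127.0.0.1, 192.168.49.2, addons-006450, localhost, minikube) cover every name the dockerd TLS endpoint on port 2376 can be reached by; the ca/server/server-key PEMs were copied into /etc/docker above and back the --tlsverify flags of the docker.service written next. A client-side sketch, with <host-port> as a hypothetical placeholder for the 127.0.0.1-mapped 2376 port:

    # Sketch: talk to the node's TLS-guarded docker endpoint with the generated certs
    docker --tlsverify --tlscacert ca.pem --tlscert cert.pem --tlskey key.pem \
      -H tcp://127.0.0.1:<host-port> version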
	I1006 14:21:02.530795  806109 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:21:02.531031  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:02.531089  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.548485  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.548797  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.548814  806109 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 14:21:02.680553  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1006 14:21:02.680572  806109 ubuntu.go:71] root file system type: overlay
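The probe above asks the node for its root filesystem type; inside a kic container this reports Docker's overlayfs, and the answer feeds the runtime configuration that follows. The same one-liner by hand:

    # Sketch: print the root filesystem type (reports "overlay" inside the node)
    df --output=fstype / | tail -n 1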
	I1006 14:21:02.680735  806109 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 14:21:02.680812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.697880  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.698189  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.698287  806109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 14:21:02.846019  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
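The unit echoed above relies on the systemd reset idiom its own comment describes: the first, empty ExecStart= clears the command inherited from the stock docker.service, and the second sets the single replacement; without the reset, systemd would refuse to start the Type=notify service with the quoted "more than one ExecStart=" error. The same idiom is more commonly written as a drop-in override; a minimal sketch (hypothetical dockerd flags):

    # Sketch: clear-then-set ExecStart via a drop-in instead of a full unit
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd://\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker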
	I1006 14:21:02.846167  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:02.863632  806109 main.go:141] libmachine: Using SSH client type: native
	I1006 14:21:02.864002  806109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37506 <nil> <nil>}
	I1006 14:21:02.864029  806109 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 14:21:03.799164  806109 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:56:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-06 14:21:02.840466123 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
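The diff output above comes from a one-line idempotent update: diff -u exits non-zero when the rendered unit differs from the installed one, and only then does the || branch move the new file into place, daemon-reload, enable and restart docker; on an unchanged unit the whole command is a no-op. The trailing "Synchronizing state ... systemd-sysv-install" lines are ordinary systemctl enable output on a host with SysV compatibility. The idiom in isolation:

    # Sketch: install a unit and restart only when the file actually changed
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl daemon-reload && sudo systemctl restart docker; }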
	I1006 14:21:03.799202  806109 machine.go:96] duration metric: took 2.432959766s to provisionDockerMachine
	I1006 14:21:03.799214  806109 client.go:171] duration metric: took 11.411453149s to LocalClient.Create
	I1006 14:21:03.799235  806109 start.go:167] duration metric: took 11.41157629s to libmachine.API.Create "addons-006450"
	I1006 14:21:03.799246  806109 start.go:293] postStartSetup for "addons-006450" (driver="docker")
	I1006 14:21:03.799257  806109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:21:03.799333  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:21:03.799381  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.817018  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:03.911433  806109 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:21:03.914606  806109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:21:03.914683  806109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:21:03.914699  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/addons for local assets ...
	I1006 14:21:03.914767  806109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/files for local assets ...
	I1006 14:21:03.914795  806109 start.go:296] duration metric: took 115.542737ms for postStartSetup
	I1006 14:21:03.915135  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:03.931532  806109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/config.json ...
	I1006 14:21:03.931854  806109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:21:03.931910  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:03.948768  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.041025  806109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:21:04.046229  806109 start.go:128] duration metric: took 11.662156071s to createHost
	I1006 14:21:04.046252  806109 start.go:83] releasing machines lock for "addons-006450", held for 11.662297525s
	I1006 14:21:04.046327  806109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-006450
	I1006 14:21:04.063754  806109 ssh_runner.go:195] Run: cat /version.json
	I1006 14:21:04.063815  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.063893  806109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:21:04.063975  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:04.082777  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.099024  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:04.268948  806109 ssh_runner.go:195] Run: systemctl --version
	I1006 14:21:04.275561  806109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:21:04.279819  806109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:21:04.279895  806109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:21:04.306291  806109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1006 14:21:04.306318  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.306351  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.306446  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.320125  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1006 14:21:04.329116  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 14:21:04.338037  806109 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.338156  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 14:21:04.347404  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.357144  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 14:21:04.366129  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:21:04.374845  806109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:21:04.382821  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 14:21:04.391940  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1006 14:21:04.400832  806109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1006 14:21:04.409604  806109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:21:04.417019  806109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:21:04.424313  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:04.532131  806109 ssh_runner.go:195] Run: sudo systemctl restart containerd
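The sed edits above rewrite /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10.1, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d; containerd is then restarted to pick the file up. A quick check of the key setting:

    # Sketch: confirm containerd ended up on the cgroupfs driver
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false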
	I1006 14:21:04.625905  806109 start.go:495] detecting cgroup driver to use...
	I1006 14:21:04.625977  806109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:21:04.626053  806109 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 14:21:04.640910  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.654413  806109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:21:04.685901  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:21:04.698603  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 14:21:04.711790  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:21:04.725497  806109 ssh_runner.go:195] Run: which cri-dockerd
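/etc/crictl.yaml was just rewritten a second time, re-pointing crictl from containerd's socket to cri-dockerd's, because this cluster runs the docker runtime behind the cri-dockerd CRI shim. The endpoint can also be passed explicitly; a sketch, assuming crictl is on the node's PATH:

    # Sketch: query the docker runtime through the CRI shim
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version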
	I1006 14:21:04.729345  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 14:21:04.737737  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1006 14:21:04.751393  806109 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 14:21:04.873692  806109 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 14:21:04.984971  806109 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 14:21:04.985108  806109 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1006 14:21:05.002843  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1006 14:21:05.020602  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.142830  806109 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 14:21:05.525909  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:21:05.538352  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1006 14:21:05.551902  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:05.567756  806109 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 14:21:05.691941  806109 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 14:21:05.814431  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:05.934017  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 14:21:05.949991  806109 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1006 14:21:05.962662  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.092789  806109 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1006 14:21:06.164834  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:21:06.178359  806109 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 14:21:06.178520  806109 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 14:21:06.182231  806109 start.go:563] Will wait 60s for crictl version
	I1006 14:21:06.182343  806109 ssh_runner.go:195] Run: which crictl
	I1006 14:21:06.185820  806109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:21:06.209958  806109 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1006 14:21:06.210077  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.232534  806109 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:21:06.261297  806109 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1006 14:21:06.261408  806109 cli_runner.go:164] Run: docker network inspect addons-006450 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:21:06.277505  806109 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:21:06.281321  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.291363  806109 kubeadm.go:883] updating cluster {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:21:06.291470  806109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:21:06.291533  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.310531  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.310560  806109 docker.go:621] Images already preloaded, skipping extraction
	I1006 14:21:06.310627  806109 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:21:06.329469  806109 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1006 14:21:06.329494  806109 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:21:06.329511  806109 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1006 14:21:06.329612  806109 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-006450 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
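The kubelet unit fragment above uses the same empty-then-set ExecStart reset as the docker unit, and its flags pin the node identity (--hostname-override, --node-ip) and the bootstrap and final kubeconfig paths. Once the node is up, the rendered unit can be inspected in place:

    # Sketch: show the effective ExecStart lines of the installed kubelet unit
    systemctl cat kubelet | grep -A 2 '^ExecStart='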
	I1006 14:21:06.329683  806109 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 14:21:06.383455  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:06.383492  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:06.383512  806109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:21:06.383538  806109 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-006450 NodeName:addons-006450 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:21:06.383695  806109 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-006450"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
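The rendered file stacks four API objects separated by ---: InitConfiguration (node registration and advertise address), ClusterConfiguration (control-plane endpoint, certificate dir, per-component extraArgs), KubeletConfiguration (cgroupfs driver, cri-dockerd endpoint, eviction disabled) and KubeProxyConfiguration (cluster CIDR, conntrack timeouts left to the host). Recent kubeadm releases can sanity-check such a file before init; a sketch using the paths from this run:

    # Sketch: have kubeadm validate the stacked config (installed below as kubeadm.yaml)
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml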
	
	I1006 14:21:06.383769  806109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:21:06.391605  806109 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:21:06.391780  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:21:06.399572  806109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1006 14:21:06.412296  806109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:21:06.425462  806109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1006 14:21:06.438424  806109 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:21:06.442129  806109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1006 14:21:06.452170  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:06.565870  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:21:06.583339  806109 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450 for IP: 192.168.49.2
	I1006 14:21:06.583363  806109 certs.go:195] generating shared ca certs ...
	I1006 14:21:06.583383  806109 certs.go:227] acquiring lock for ca certs: {Name:mk78547ccc35462965e66385811a001935f7f131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.583518  806109 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key
	I1006 14:21:06.758169  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt ...
	I1006 14:21:06.758199  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt: {Name:mke50bad3f8d3d8c6fc7003f3930a8a3fa326b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758398  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key ...
	I1006 14:21:06.758412  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key: {Name:mk5abe63bfac59b481f1b34a2e6312b79c376290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:06.758508  806109 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key
	I1006 14:21:07.226648  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt ...
	I1006 14:21:07.226681  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt: {Name:mk35f86863953865131b747e65133218cef7ac69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.226896  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key ...
	I1006 14:21:07.226910  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key: {Name:mk32f77223b3be8cca86a275e013030fd8c48071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:07.227011  806109 certs.go:257] generating profile certs ...
	I1006 14:21:07.227078  806109 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key
	I1006 14:21:07.227095  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt with IP's: []
	I1006 14:21:08.232319  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt ...
	I1006 14:21:08.232348  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: {Name:mk237396132558310e9472dccd1a03e68855c562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232531  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key ...
	I1006 14:21:08.232540  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.key: {Name:mkddc2eaac1b60c97f1b0888b122f0d14ff81585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.232614  806109 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa
	I1006 14:21:08.232629  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1006 14:21:08.361861  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa ...
	I1006 14:21:08.361891  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa: {Name:mk44f5f6071204e4219adaa4cbde67bf1f671150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362071  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa ...
	I1006 14:21:08.362085  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa: {Name:mkaddbc6367afe0cdf204382e298fb821349ec34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:08.362173  806109 certs.go:382] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt
	I1006 14:21:08.362251  806109 certs.go:386] copying /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key.d811b9fa -> /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key
	I1006 14:21:08.362308  806109 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key
	I1006 14:21:08.362337  806109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt with IP's: []
	I1006 14:21:09.174420  806109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt ...
	I1006 14:21:09.174451  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt: {Name:mk6a018d5a25b41127abffe602062c5fb3c9da1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174648  806109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key ...
	I1006 14:21:09.174662  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key: {Name:mk882903eb03fda7b8a7b7a45601eaab350263dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:09.174869  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:21:09.174912  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:21:09.174936  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:21:09.174963  806109 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem (1675 bytes)
	I1006 14:21:09.175647  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:21:09.195248  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:21:09.214696  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:21:09.234148  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 14:21:09.252534  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1006 14:21:09.270877  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:21:09.289342  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:21:09.307151  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:21:09.325295  806109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:21:09.343473  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:21:09.356830  806109 ssh_runner.go:195] Run: openssl version
	I1006 14:21:09.363194  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:21:09.371688  806109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375519  806109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.375603  806109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:21:09.421333  806109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
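The two commands above install the minikube CA using OpenSSL's subject-hash convention: openssl x509 -hash -noout prints the eight-hex-digit hash of the certificate's subject, and the <hash>.0 symlink (here b5213941.0) is what TLS clients scanning /etc/ssl/certs actually look up. The derivation by hand:

    # Sketch: derive the b5213941.0 symlink name from the CA's subject hash
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"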
	I1006 14:21:09.430436  806109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:21:09.434631  806109 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1006 14:21:09.434680  806109 kubeadm.go:400] StartCluster: {Name:addons-006450 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-006450 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:21:09.434811  806109 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:21:09.456777  806109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:21:09.465021  806109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:21:09.473033  806109 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1006 14:21:09.473109  806109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:21:09.480866  806109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1006 14:21:09.480886  806109 kubeadm.go:157] found existing configuration files:
	
	I1006 14:21:09.480957  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1006 14:21:09.488809  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1006 14:21:09.488875  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1006 14:21:09.496674  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1006 14:21:09.504791  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1006 14:21:09.504865  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:21:09.512822  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.520596  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1006 14:21:09.520672  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:21:09.528333  806109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1006 14:21:09.536500  806109 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1006 14:21:09.536573  806109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:21:09.544325  806109 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
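kubeadm init runs against the rendered config with a long --ignore-preflight-errors list: inside a privileged container, checks such as Swap, Mem, NumCPU, SystemVerification and the bridge-nf-call-iptables file either cannot pass or do not apply, so they are downgraded to the warnings printed just below. The preflight phase can also be replayed on its own; a sketch with a trimmed suppression list:

    # Sketch: rerun only the preflight checks with (some of) the same suppressions
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,SystemVerification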
	I1006 14:21:09.582751  806109 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1006 14:21:09.582817  806109 kubeadm.go:318] [preflight] Running pre-flight checks
	I1006 14:21:09.609398  806109 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1006 14:21:09.609476  806109 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1006 14:21:09.609518  806109 kubeadm.go:318] OS: Linux
	I1006 14:21:09.609570  806109 kubeadm.go:318] CGROUPS_CPU: enabled
	I1006 14:21:09.609625  806109 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1006 14:21:09.609679  806109 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1006 14:21:09.609733  806109 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1006 14:21:09.609792  806109 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1006 14:21:09.609847  806109 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1006 14:21:09.609902  806109 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1006 14:21:09.609955  806109 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1006 14:21:09.610011  806109 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1006 14:21:09.690823  806109 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1006 14:21:09.690944  806109 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1006 14:21:09.691059  806109 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1006 14:21:09.716052  806109 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1006 14:21:09.722414  806109 out.go:252]   - Generating certificates and keys ...
	I1006 14:21:09.722525  806109 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1006 14:21:09.722604  806109 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1006 14:21:10.515752  806109 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1006 14:21:11.397580  806109 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1006 14:21:12.455188  806109 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1006 14:21:12.900218  806109 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1006 14:21:13.333042  806109 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1006 14:21:13.333192  806109 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:13.558599  806109 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1006 14:21:13.558992  806109 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-006450 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1006 14:21:14.483025  806109 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1006 14:21:15.088755  806109 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1006 14:21:15.636700  806109 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1006 14:21:15.637033  806109 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1006 14:21:16.739302  806109 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1006 14:21:17.694897  806109 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1006 14:21:18.343756  806109 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1006 14:21:18.712603  806109 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1006 14:21:19.266809  806109 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1006 14:21:19.267485  806109 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1006 14:21:19.270758  806109 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1006 14:21:19.274504  806109 out.go:252]   - Booting up control plane ...
	I1006 14:21:19.274628  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1006 14:21:19.274721  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1006 14:21:19.275790  806109 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1006 14:21:19.292829  806109 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1006 14:21:19.293280  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1006 14:21:19.301074  806109 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1006 14:21:19.301395  806109 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1006 14:21:19.301643  806109 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1006 14:21:19.440373  806109 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1006 14:21:19.440504  806109 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1006 14:21:20.940044  806109 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501293606s
	I1006 14:21:20.940318  806109 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1006 14:21:20.940416  806109 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1006 14:21:20.940516  806109 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1006 14:21:20.940602  806109 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1006 14:21:24.828532  806109 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.887425512s
	I1006 14:21:27.037731  806109 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.097440124s
	I1006 14:21:27.942161  806109 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001481359s
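The control-plane-check phase above simply polls each component's health endpoint until it answers. A minimal Go sketch of that loop, using the exact URLs from the log (the skip-verify TLS transport is an assumption for a local, self-signed control plane, not kubeadm's actual client setup):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a health endpoint until it returns 200 OK or the timeout expires.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://192.168.49.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, waitHealthy(u, 4*time.Minute))
	}
}
```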
	I1006 14:21:27.961418  806109 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1006 14:21:27.977744  806109 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1006 14:21:27.992347  806109 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1006 14:21:27.992563  806109 kubeadm.go:318] [mark-control-plane] Marking the node addons-006450 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1006 14:21:28.013758  806109 kubeadm.go:318] [bootstrap-token] Using token: e1p0fh.afy23ij81unzzcb1
	I1006 14:21:28.016851  806109 out.go:252]   - Configuring RBAC rules ...
	I1006 14:21:28.016992  806109 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1006 14:21:28.022251  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1006 14:21:28.031560  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1006 14:21:28.036500  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1006 14:21:28.041064  806109 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1006 14:21:28.048112  806109 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1006 14:21:28.349107  806109 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1006 14:21:28.790402  806109 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1006 14:21:29.351014  806109 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1006 14:21:29.352283  806109 kubeadm.go:318] 
	I1006 14:21:29.352364  806109 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1006 14:21:29.352375  806109 kubeadm.go:318] 
	I1006 14:21:29.352461  806109 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1006 14:21:29.352472  806109 kubeadm.go:318] 
	I1006 14:21:29.352498  806109 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1006 14:21:29.352567  806109 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1006 14:21:29.352625  806109 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1006 14:21:29.352634  806109 kubeadm.go:318] 
	I1006 14:21:29.352691  806109 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1006 14:21:29.352700  806109 kubeadm.go:318] 
	I1006 14:21:29.352750  806109 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1006 14:21:29.352759  806109 kubeadm.go:318] 
	I1006 14:21:29.352815  806109 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1006 14:21:29.352899  806109 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1006 14:21:29.352974  806109 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1006 14:21:29.352983  806109 kubeadm.go:318] 
	I1006 14:21:29.353071  806109 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1006 14:21:29.353153  806109 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1006 14:21:29.353161  806109 kubeadm.go:318] 
	I1006 14:21:29.353249  806109 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353360  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 \
	I1006 14:21:29.353397  806109 kubeadm.go:318] 	--control-plane 
	I1006 14:21:29.353406  806109 kubeadm.go:318] 
	I1006 14:21:29.353495  806109 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1006 14:21:29.353503  806109 kubeadm.go:318] 
	I1006 14:21:29.353588  806109 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e1p0fh.afy23ij81unzzcb1 \
	I1006 14:21:29.353698  806109 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:76fb571382ca9706d46d85899e8a2e961f0c518218722f3b163e5bd4963fb9a1 
	I1006 14:21:29.356907  806109 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1006 14:21:29.357135  806109 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1006 14:21:29.357260  806109 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1006 14:21:29.357283  806109 cni.go:84] Creating CNI manager for ""
	I1006 14:21:29.357298  806109 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:21:29.360240  806109 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:21:29.363197  806109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:21:29.371108  806109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
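The 496-byte conflist scp'd above configures the standard bridge CNI plugin. The log does not show the file's contents; the JSON below is an illustrative assumption based on the stock bridge + host-local + portmap plugins, written the same way minikube writes it:

```go
package main

import "os"

// conflist is an assumed minimal bridge CNI config of the kind written to
// /etc/cni/net.d/1-k8s.conflist above; the exact bytes minikube ships are not in the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```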
	I1006 14:21:29.386109  806109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:21:29.386176  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:29.386250  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-006450 minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81 minikube.k8s.io/name=addons-006450 minikube.k8s.io/primary=true
	I1006 14:21:29.530062  806109 ops.go:34] apiserver oom_adj: -16
	I1006 14:21:29.530192  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.031190  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:30.530267  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.030839  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:31.530611  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.030258  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:32.530722  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.030864  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:33.530331  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.030732  806109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1006 14:21:34.138751  806109 kubeadm.go:1113] duration metric: took 4.752637843s to wait for elevateKubeSystemPrivileges
	I1006 14:21:34.138779  806109 kubeadm.go:402] duration metric: took 24.704102384s to StartCluster
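The elevateKubeSystemPrivileges wait above is a plain poll: after creating the minikube-rbac clusterrolebinding, the runner re-executes `kubectl get sa default` every ~500ms until the default ServiceAccount exists. A sketch of that loop, with the kubectl path and kubeconfig taken from the log lines:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	args := []string{"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig"}
	start := time.Now()
	// Poll until the default ServiceAccount is visible (exit status 0).
	for {
		if err := exec.Command(kubectl, args...).Run(); err == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
}
```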
	I1006 14:21:34.138798  806109 settings.go:142] acquiring lock: {Name:mk86d6d1803b10e0f74b7ca9be175f37419eb162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.138932  806109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:21:34.139342  806109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/kubeconfig: {Name:mkd0e7dce0fefee9d8326b7f5e1280f715df58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:21:34.139547  806109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:21:34.139652  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1006 14:21:34.139913  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.139945  806109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1006 14:21:34.140026  806109 addons.go:69] Setting yakd=true in profile "addons-006450"
	I1006 14:21:34.140047  806109 addons.go:238] Setting addon yakd=true in "addons-006450"
	I1006 14:21:34.140069  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.140558  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.140784  806109 addons.go:69] Setting inspektor-gadget=true in profile "addons-006450"
	I1006 14:21:34.140802  806109 addons.go:238] Setting addon inspektor-gadget=true in "addons-006450"
	I1006 14:21:34.140825  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.141217  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.141581  806109 addons.go:69] Setting metrics-server=true in profile "addons-006450"
	I1006 14:21:34.141646  806109 addons.go:238] Setting addon metrics-server=true in "addons-006450"
	I1006 14:21:34.141685  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.142139  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.143205  806109 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.143238  806109 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-006450"
	I1006 14:21:34.143270  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.143806  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.144933  806109 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-006450"
	I1006 14:21:34.144962  806109 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-006450"
	I1006 14:21:34.144997  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.145499  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.146720  806109 addons.go:69] Setting cloud-spanner=true in profile "addons-006450"
	I1006 14:21:34.146748  806109 addons.go:238] Setting addon cloud-spanner=true in "addons-006450"
	I1006 14:21:34.146777  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.147335  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.156945  806109 addons.go:69] Setting registry=true in profile "addons-006450"
	I1006 14:21:34.157043  806109 addons.go:238] Setting addon registry=true in "addons-006450"
	I1006 14:21:34.157131  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.157718  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.176071  806109 addons.go:69] Setting registry-creds=true in profile "addons-006450"
	I1006 14:21:34.176145  806109 addons.go:238] Setting addon registry-creds=true in "addons-006450"
	I1006 14:21:34.176197  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.176774  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.185281  806109 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-006450"
	I1006 14:21:34.185740  806109 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:34.185846  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.187060  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.193152  806109 addons.go:69] Setting storage-provisioner=true in profile "addons-006450"
	I1006 14:21:34.193188  806109 addons.go:238] Setting addon storage-provisioner=true in "addons-006450"
	I1006 14:21:34.193224  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.193707  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.207765  806109 addons.go:69] Setting default-storageclass=true in profile "addons-006450"
	I1006 14:21:34.207813  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-006450"
	I1006 14:21:34.208233  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.208517  806109 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-006450"
	I1006 14:21:34.208563  806109 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-006450"
	I1006 14:21:34.208903  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.218653  806109 addons.go:69] Setting volcano=true in profile "addons-006450"
	I1006 14:21:34.219019  806109 addons.go:238] Setting addon volcano=true in "addons-006450"
	I1006 14:21:34.219129  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.219730  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.219851  806109 addons.go:69] Setting gcp-auth=true in profile "addons-006450"
	I1006 14:21:34.219900  806109 mustload.go:65] Loading cluster: addons-006450
	I1006 14:21:34.220156  806109 config.go:182] Loaded profile config "addons-006450": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:21:34.220463  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.244567  806109 addons.go:69] Setting volumesnapshots=true in profile "addons-006450"
	I1006 14:21:34.244607  806109 addons.go:238] Setting addon volumesnapshots=true in "addons-006450"
	I1006 14:21:34.244648  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.245166  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.256667  806109 out.go:179] * Verifying Kubernetes components...
	I1006 14:21:34.256935  806109 addons.go:69] Setting ingress=true in profile "addons-006450"
	I1006 14:21:34.256960  806109 addons.go:238] Setting addon ingress=true in "addons-006450"
	I1006 14:21:34.257001  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.257557  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.285413  806109 addons.go:69] Setting ingress-dns=true in profile "addons-006450"
	I1006 14:21:34.285459  806109 addons.go:238] Setting addon ingress-dns=true in "addons-006450"
	I1006 14:21:34.285510  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.286061  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
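Each "Setting addon" step above is paired with a host check that shells out to `docker container inspect` with a Go-template format string to confirm the node container is still running. A sketch of that repeated check:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus mirrors the cli_runner invocation repeated above:
// docker container inspect <name> --format={{.State.Status}}
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-006450")
	fmt.Println(status, err) // expect "running" while the node is up
}
```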
	I1006 14:21:34.332782  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1006 14:21:34.338069  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1006 14:21:34.338156  806109 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1006 14:21:34.338257  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.357721  806109 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1006 14:21:34.362166  806109 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:34.362235  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1006 14:21:34.362331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.380568  806109 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1006 14:21:34.383806  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1006 14:21:34.383934  806109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1006 14:21:34.384103  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.384670  806109 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1006 14:21:34.393975  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1006 14:21:34.394079  806109 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1006 14:21:34.394248  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.420035  806109 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1006 14:21:34.423442  806109 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:34.423541  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1006 14:21:34.423642  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.431543  806109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:21:34.457975  806109 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1006 14:21:34.497876  806109 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1006 14:21:34.498037  806109 out.go:179]   - Using image docker.io/registry:3.0.0
	I1006 14:21:34.510678  806109 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1006 14:21:34.519256  806109 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:34.519362  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1006 14:21:34.519521  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.526420  806109 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:34.526447  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1006 14:21:34.526546  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.528693  806109 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1006 14:21:34.528724  806109 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1006 14:21:34.528812  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.532917  806109 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1006 14:21:34.536266  806109 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1006 14:21:34.537209  806109 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1006 14:21:34.537230  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1006 14:21:34.537331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.542063  806109 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-006450"
	I1006 14:21:34.542107  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.542545  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.581749  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1006 14:21:34.585130  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.588025  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:34.590892  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:34.590917  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1006 14:21:34.591008  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.605945  806109 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:34.605973  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1006 14:21:34.606041  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.626809  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.628682  806109 addons.go:238] Setting addon default-storageclass=true in "addons-006450"
	I1006 14:21:34.628721  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.629125  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:34.636774  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1006 14:21:34.640152  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:34.649003  806109 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1006 14:21:34.649626  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.656019  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1006 14:21:34.658838  806109 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1006 14:21:34.664662  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.676340  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1006 14:21:34.676611  806109 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1006 14:21:34.703838  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.723458  806109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:21:34.726631  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:34.726657  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:21:34.726743  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.752688  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1006 14:21:34.756756  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1006 14:21:34.760053  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1006 14:21:34.763938  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1006 14:21:34.769389  806109 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1006 14:21:34.772287  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1006 14:21:34.772317  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1006 14:21:34.772394  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.772747  806109 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:34.772787  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1006 14:21:34.772862  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.804304  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.808420  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.822462  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.823147  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.867044  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.870362  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.874341  806109 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1006 14:21:34.876981  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.878063  806109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:34.878079  806109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:21:34.878140  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.888089  806109 out.go:179]   - Using image docker.io/busybox:stable
	I1006 14:21:34.891239  806109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:34.891265  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1006 14:21:34.891331  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:34.920306  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:34.945324  806109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1006 14:21:34.947994  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:34.970150  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.970251  806109 retry.go:31] will retry after 147.40402ms: ssh: handshake failed: EOF
	W1006 14:21:34.972537  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:34.972566  806109 retry.go:31] will retry after 281.687683ms: ssh: handshake failed: EOF
	I1006 14:21:34.975793  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.005444  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	W1006 14:21:35.009771  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.009812  806109 retry.go:31] will retry after 207.774831ms: ssh: handshake failed: EOF
	I1006 14:21:35.012483  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:35.127149  806109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1006 14:21:35.219409  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.219491  806109 retry.go:31] will retry after 414.252414ms: ssh: handshake failed: EOF
	W1006 14:21:35.255517  806109 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1006 14:21:35.255595  806109 retry.go:31] will retry after 378.429324ms: ssh: handshake failed: EOF
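The handshake-EOF lines above come from SSH dials to the container's published port racing the in-container sshd; retry.go re-attempts each dial after a randomized delay. A generic sketch of that pattern (the jitter bounds are an assumption; the log only shows the chosen delays):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered delay between tries, like the retry.go lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(5, 200*time.Millisecond, func() error {
		return fmt.Errorf("ssh: handshake failed: EOF") // stand-in for the real ssh.Dial
	})
	fmt.Println("final:", err)
}
```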
	I1006 14:21:35.851743  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1006 14:21:35.853206  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1006 14:21:35.989160  806109 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:35.989181  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1006 14:21:36.111352  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1006 14:21:36.151070  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1006 14:21:36.151165  806109 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1006 14:21:36.192781  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1006 14:21:36.192855  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1006 14:21:36.226627  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1006 14:21:36.226690  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1006 14:21:36.243375  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1006 14:21:36.255630  806109 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1006 14:21:36.255746  806109 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1006 14:21:36.350477  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1006 14:21:36.350562  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1006 14:21:36.377661  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1006 14:21:36.396057  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:36.399305  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:21:36.426714  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1006 14:21:36.426796  806109 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1006 14:21:36.427640  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:21:36.435627  806109 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.435647  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1006 14:21:36.443471  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1006 14:21:36.479083  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1006 14:21:36.481831  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1006 14:21:36.481904  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1006 14:21:36.527849  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1006 14:21:36.527927  806109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1006 14:21:36.537515  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1006 14:21:36.537591  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1006 14:21:36.597935  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1006 14:21:36.598000  806109 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1006 14:21:36.601149  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1006 14:21:36.790553  806109 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:36.790647  806109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1006 14:21:36.821053  806109 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1006 14:21:36.821135  806109 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1006 14:21:36.867220  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1006 14:21:36.871426  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1006 14:21:36.871504  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1006 14:21:36.880338  806109 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.753102328s)
	I1006 14:21:36.880515  806109 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.935150087s)
	I1006 14:21:36.880679  806109 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
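The bash/sed pipeline completed above injects a `hosts` stanza mapping host.minikube.internal to 192.168.49.1 into the coredns ConfigMap's Corefile, just before the `forward` plugin. A client-go sketch of the same edit (an equivalent under stated assumptions, not minikube's actual sed-based implementation):

```go
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	corefile := cm.Data["Corefile"]
	// Insert the hosts block just before the forward plugin, mirroring the sed expression.
	idx := strings.Index(corefile, "        forward .")
	if idx >= 0 && !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = corefile[:idx] + hosts + corefile[idx:]
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
```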
	I1006 14:21:36.881380  806109 node_ready.go:35] waiting up to 6m0s for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887470  806109 node_ready.go:49] node "addons-006450" is "Ready"
	I1006 14:21:36.887509  806109 node_ready.go:38] duration metric: took 6.110221ms for node "addons-006450" to be "Ready" ...
	I1006 14:21:36.887526  806109 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:21:36.887614  806109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:21:36.891551  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1006 14:21:37.041224  806109 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.041263  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1006 14:21:37.185540  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1006 14:21:37.185582  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1006 14:21:37.245756  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1006 14:21:37.245794  806109 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1006 14:21:37.320678  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1006 14:21:37.384934  806109 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-006450" context rescaled to 1 replicas
	I1006 14:21:37.439254  806109 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1006 14:21:37.439280  806109 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1006 14:21:37.491833  806109 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:37.491853  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1006 14:21:37.710140  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.858315722s)
	I1006 14:21:37.710258  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.856978431s)
	I1006 14:21:37.797019  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1006 14:21:37.797087  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1006 14:21:38.055462  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.944020191s)
	I1006 14:21:38.066071  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:38.209415  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1006 14:21:38.209495  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1006 14:21:38.308015  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1006 14:21:38.308047  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1006 14:21:38.731766  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1006 14:21:38.731811  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1006 14:21:38.884673  806109 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:38.884702  806109 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1006 14:21:39.201324  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1006 14:21:42.056707  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1006 14:21:42.056850  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:42.096992  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:43.527695  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.284260443s)
	I1006 14:21:43.527736  806109 addons.go:479] Verifying addon ingress=true in "addons-006450"
	I1006 14:21:43.527908  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.150170305s)
	I1006 14:21:43.528008  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.131874449s)
	W1006 14:21:43.528029  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:43.528050  806109 retry.go:31] will retry after 227.873764ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
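The failure above is kubectl's client-side validation rejecting ig-crd.yaml for carrying no apiVersion or kind, which is consistent with the earlier transfer of that file at only 14 bytes (14:21:34); the `--force` re-apply below is the retry for it. A sketch of the check kubectl is effectively making:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// A manifest must set apiVersion and kind; a truncated or near-empty
// ig-crd.yaml cannot, matching the "apiVersion not set, kind not set" error above.
func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	var m struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}
	if err := yaml.Unmarshal(data, &m); err != nil {
		panic(err)
	}
	if m.APIVersion == "" || m.Kind == "" {
		fmt.Println("error validating data: [apiVersion not set, kind not set]")
	}
}
```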
	I1006 14:21:43.528137  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.128758076s)
	I1006 14:21:43.528185  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.100376481s)
	I1006 14:21:43.528469  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.084972148s)
	I1006 14:21:43.528566  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.04940419s)
	I1006 14:21:43.528706  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.927477657s)
	I1006 14:21:43.528726  806109 addons.go:479] Verifying addon registry=true in "addons-006450"
	I1006 14:21:43.532546  806109 out.go:179] * Verifying ingress addon...
	I1006 14:21:43.534069  806109 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1006 14:21:43.534935  806109 out.go:179] * Verifying registry addon...
	I1006 14:21:43.537759  806109 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1006 14:21:43.540886  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1006 14:21:43.565742  806109 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1006 14:21:43.565781  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:43.568676  806109 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1006 14:21:43.568708  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1006 14:21:43.576208  806109 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1006 14:21:43.749034  806109 addons.go:238] Setting addon gcp-auth=true in "addons-006450"
	I1006 14:21:43.749121  806109 host.go:66] Checking if "addons-006450" exists ...
	I1006 14:21:43.749685  806109 cli_runner.go:164] Run: docker container inspect addons-006450 --format={{.State.Status}}
	I1006 14:21:43.756132  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:43.787457  806109 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1006 14:21:43.787548  806109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-006450
	I1006 14:21:43.815805  806109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37506 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/addons-006450/id_rsa Username:docker}
	I1006 14:21:44.114671  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:44.115253  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.548438  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:44.550543  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.046803  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:45.049237  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581293  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:45.581847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.153351  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:46.153798  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.640887  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:46.643861  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081245  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.081634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:47.568674  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:47.569175  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.056720  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.057131  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
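The alternating kapi.go lines above are a label-selector poll: list the pods matching the selector and wait until every match reports phase Running. A client-go sketch of that wait (error handling trimmed in main for brevity):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until all are Running.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pods %q in %q not Running after %s", selector, ns, timeout)
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)
	fmt.Println(waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute))
}
```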
	I1006 14:21:48.585162  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.717857623s)
	I1006 14:21:48.585271  806109 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (11.697643759s)
	I1006 14:21:48.585318  806109 api_server.go:72] duration metric: took 14.445740723s to wait for apiserver process to appear ...
	I1006 14:21:48.585343  806109 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:21:48.585375  806109 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1006 14:21:48.585803  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.694205832s)
	I1006 14:21:48.585856  806109 addons.go:479] Verifying addon metrics-server=true in "addons-006450"
	I1006 14:21:48.585929  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.265223311s)
	I1006 14:21:48.586329  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.520142743s)
	W1006 14:21:48.586371  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1006 14:21:48.586391  806109 retry.go:31] will retry after 354.82385ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
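The failure above is a CRD-establishment race: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass object in the same kubectl apply as the CRDs that introduce that kind, so the first attempt fails with "no matches for kind" until the API server has registered the new type; the subsequent retry succeeds. A minimal sketch of the split-and-wait approach that avoids the race entirely (hypothetical file names, assumes kubectl on PATH; this is not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run shells out to kubectl and surfaces combined output on failure.
    func run(args ...string) error {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl %v: %w\n%s", args, err, out)
    	}
    	return nil
    }

    func main() {
    	// Apply the CRDs on their own first.
    	if err := run("apply", "-f", "snapshot-crds.yaml"); err != nil { // hypothetical file
    		panic(err)
    	}
    	// Block until the API server has established the new kind.
    	if err := run("wait", "--for=condition=Established",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
    		panic(err)
    	}
    	// Only now apply objects of that kind; "no matches for kind" can no longer occur.
    	if err := run("apply", "-f", "csi-hostpath-snapshotclass.yaml"); err != nil {
    		panic(err)
    	}
    }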
	I1006 14:21:48.586570  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.385202699s)
	I1006 14:21:48.586585  806109 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-006450"
	I1006 14:21:48.590422  806109 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-006450 service yakd-dashboard -n yakd-dashboard
	
	I1006 14:21:48.592576  806109 out.go:179] * Verifying csi-hostpath-driver addon...
	I1006 14:21:48.597670  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1006 14:21:48.614206  806109 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1006 14:21:48.647358  806109 api_server.go:141] control plane version: v1.34.1
	I1006 14:21:48.647389  806109 api_server.go:131] duration metric: took 62.022744ms to wait for apiserver health ...
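The healthz wait logged by api_server.go above is, in effect, a plain HTTPS GET against https://192.168.49.2:8443/healthz that treats a 200 "ok" response as healthy. A minimal sketch of such a poller (hypothetical helper, not minikube's code; TLS verification is skipped because the apiserver cert here is cluster-signed):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// Sketch only: the apiserver cert is cluster-signed, so verification is skipped here.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz answered 200: control plane is up
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("ok")
    }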
	I1006 14:21:48.647399  806109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:21:48.648507  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:48.648899  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:48.690542  806109 system_pods.go:59] 19 kube-system pods found
	I1006 14:21:48.690881  806109 system_pods.go:61] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.690920  806109 system_pods.go:61] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.690960  806109 system_pods.go:61] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.690990  806109 system_pods.go:61] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.691016  806109 system_pods.go:61] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.691053  806109 system_pods.go:61] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.691073  806109 system_pods.go:61] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.691092  806109 system_pods.go:61] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.691138  806109 system_pods.go:61] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.691163  806109 system_pods.go:61] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.691184  806109 system_pods.go:61] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.691218  806109 system_pods.go:61] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.691244  806109 system_pods.go:61] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.691266  806109 system_pods.go:61] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.691302  806109 system_pods.go:61] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.691330  806109 system_pods.go:61] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.691354  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691391  806109 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.691417  806109 system_pods.go:61] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.691437  806109 system_pods.go:74] duration metric: took 44.032107ms to wait for pod list to return data ...
	I1006 14:21:48.691473  806109 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:21:48.690844  806109 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1006 14:21:48.691711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
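The interleaved kapi.go:96 lines throughout this section are label-selector polls: each addon verifier lists the pods matching its label roughly every 500ms and logs the current phase until all of them report Running. A rough client-go sketch of that loop (hypothetical helper, not minikube's kapi package):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel lists pods matching selector until all of them are Running or the deadline passes.
    func waitForLabel(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			running := 0
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					running++
    				}
    			}
    			if running == len(pods.Items) {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
    	}
    	return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = waitForLabel(client, "kube-system",
    		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
    	fmt.Println(err)
    }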
	I1006 14:21:48.780129  806109 default_sa.go:45] found service account: "default"
	I1006 14:21:48.780207  806109 default_sa.go:55] duration metric: took 88.709889ms for default service account to be created ...
	I1006 14:21:48.780231  806109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:21:48.888790  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.132593822s)
	W1006 14:21:48.888876  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:48.888908  806109 retry.go:31] will retry after 467.080472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
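Unlike the snapshot-class race above, this ig-crd.yaml failure is not transient: kubectl's client-side validation rejects the file because some document in it lacks the mandatory top-level apiVersion and kind fields, so every retry in the rest of this section fails identically. A pre-flight check one could run over such a manifest (a sketch, assuming gopkg.in/yaml.v3 and a local copy of the failing file):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // header is the minimal shape every Kubernetes manifest document must declare.
    type header struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the failing manifest
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var h header
    		if err := dec.Decode(&h); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// kubectl rejects the whole apply when either field is empty in any document.
    		if h.APIVersion == "" || h.Kind == "" {
    			fmt.Printf("document %d: apiVersion or kind not set\n", i)
    		}
    	}
    }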
	I1006 14:21:48.888970  806109 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.101487907s)
	I1006 14:21:48.892596  806109 system_pods.go:86] 19 kube-system pods found
	I1006 14:21:48.892682  806109 system_pods.go:89] "coredns-66bc5c9577-5b26c" [b2fadab4-223c-4127-ae78-2734411d72b5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:21:48.892707  806109 system_pods.go:89] "coredns-66bc5c9577-z6nm4" [7fc2de03-9a40-4426-8af4-1216ed30bad3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1006 14:21:48.892729  806109 system_pods.go:89] "csi-hostpath-attacher-0" [f5fb1d05-3f2a-4b8a-b2ed-df5688d53301] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1006 14:21:48.892769  806109 system_pods.go:89] "csi-hostpath-resizer-0" [03b524e2-88a1-4c1c-9014-8b60efd178c5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1006 14:21:48.892792  806109 system_pods.go:89] "csi-hostpathplugin-jdxpx" [dee0a0f1-55fc-4b8c-8e11-deef46bcb09b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1006 14:21:48.892812  806109 system_pods.go:89] "etcd-addons-006450" [68d8971d-a245-46ed-aeea-b6c95eaaa5a1] Running
	I1006 14:21:48.892844  806109 system_pods.go:89] "kube-apiserver-addons-006450" [859fa9a9-9411-46dc-a7a4-6f90f229bcb7] Running
	I1006 14:21:48.892868  806109 system_pods.go:89] "kube-controller-manager-addons-006450" [de781030-92f3-4acc-81f4-6ea4d01e03a7] Running
	I1006 14:21:48.892892  806109 system_pods.go:89] "kube-ingress-dns-minikube" [ed71a121-1938-4fcd-98ba-91506484a2ba] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1006 14:21:48.892925  806109 system_pods.go:89] "kube-proxy-rr8rw" [081a658a-cae9-4fff-a7ca-ec779b247fb7] Running
	I1006 14:21:48.892962  806109 system_pods.go:89] "kube-scheduler-addons-006450" [74dbd2fb-a5c2-463a-b49f-0d6b7ab88301] Running
	I1006 14:21:48.892984  806109 system_pods.go:89] "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1006 14:21:48.893021  806109 system_pods.go:89] "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1006 14:21:48.893045  806109 system_pods.go:89] "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1006 14:21:48.893080  806109 system_pods.go:89] "registry-creds-764b6fb674-gxwfl" [a8521a0d-ed5a-452c-9fe0-94e6798668f2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1006 14:21:48.893105  806109 system_pods.go:89] "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1006 14:21:48.893126  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6bdv2" [4cd0ea0b-af7f-46f8-bd9b-8082dfd0fba4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893161  806109 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8stqh" [d3201aa7-7b51-4180-abc6-274d440ee6d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1006 14:21:48.893183  806109 system_pods.go:89] "storage-provisioner" [8e59991a-c6eb-407e-bacd-d535ad3d89b9] Running
	I1006 14:21:48.893204  806109 system_pods.go:126] duration metric: took 112.954104ms to wait for k8s-apps to be running ...
	I1006 14:21:48.893238  806109 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:21:48.893331  806109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:21:48.893436  806109 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1006 14:21:48.897290  806109 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1006 14:21:48.900672  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1006 14:21:48.900752  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1006 14:21:48.942085  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1006 14:21:48.960118  806109 system_svc.go:56] duration metric: took 66.871905ms WaitForService to wait for kubelet
	I1006 14:21:48.960199  806109 kubeadm.go:586] duration metric: took 14.820620987s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:21:48.960231  806109 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:21:48.965554  806109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:21:48.965640  806109 node_conditions.go:123] node cpu capacity is 2
	I1006 14:21:48.965667  806109 node_conditions.go:105] duration metric: took 5.41607ms to run NodePressure ...
	I1006 14:21:48.965693  806109 start.go:241] waiting for startup goroutines ...
	I1006 14:21:48.984429  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1006 14:21:48.984493  806109 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1006 14:21:49.062891  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.063409  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.102274  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:49.109468  806109 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.109495  806109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1006 14:21:49.163209  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1006 14:21:49.357126  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:49.543241  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:49.545480  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:49.602876  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.041860  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.044347  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.102201  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:50.541424  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:50.543788  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:50.625651  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.006456  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.064277984s)
	I1006 14:21:51.006543  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.84331281s)
	I1006 14:21:51.010142  806109 addons.go:479] Verifying addon gcp-auth=true in "addons-006450"
	I1006 14:21:51.025044  806109 out.go:179] * Verifying gcp-auth addon...
	I1006 14:21:51.032841  806109 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1006 14:21:51.036529  806109 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1006 14:21:51.036555  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.042265  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.044526  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.102619  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.536647  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:51.544904  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:51.545440  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:51.602200  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:51.864284  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.507114739s)
	W1006 14:21:51.864377  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:51.864433  806109 retry.go:31] will retry after 615.286821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:52.037094  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.041054  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.043625  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.101572  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:52.479941  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:52.536478  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:52.541425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:52.543774  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:52.600990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.035872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.041098  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.043636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.101845  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:53.536239  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:53.536598  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.05658149s)
	W1006 14:21:53.536657  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.536695  806109 retry.go:31] will retry after 1.187113289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:53.541601  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:53.543552  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:53.602095  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.037487  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.042200  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.045343  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.102498  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.537542  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:54.542167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:54.544351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:54.602290  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:54.724667  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:55.036372  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.043120  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.044769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.101792  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.536221  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:55.541111  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:55.543457  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:55.601561  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:55.840769  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.116063398s)
	W1006 14:21:55.840813  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:55.840833  806109 retry.go:31] will retry after 947.610718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:56.036387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.043063  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.044685  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.101635  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.536456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:56.541501  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:56.543585  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:56.601983  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:56.789245  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:57.036659  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.042057  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.044676  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.102243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.537164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:57.543103  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:57.544004  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:57.601850  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:57.839191  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.049904578s)
	W1006 14:21:57.839238  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:57.839258  806109 retry.go:31] will retry after 1.03292313s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:58.037616  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.041961  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.044496  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.107912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.536745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:58.540665  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:58.544634  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:58.601133  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:58.872574  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:21:59.036224  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.041408  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.044098  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.101370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.536626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:21:59.542541  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:21:59.543654  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:21:59.601836  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:21:59.922791  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.050177986s)
	W1006 14:21:59.922823  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:21:59.922842  806109 retry.go:31] will retry after 2.488598562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:00.043764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.064604  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.065064  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.129394  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:00.537107  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:00.541010  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:00.543818  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:00.628309  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.036861  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.043610  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.046494  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.102249  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:01.537399  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:01.541534  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:01.543844  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:01.601153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.038594  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.041768  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.044895  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.102517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:02.411855  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:02.535770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:02.540865  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:02.544524  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:02.601881  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.036514  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.041497  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.043732  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.101053  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.551361  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:03.551723  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:03.552096  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:03.607741  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:03.821574  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.409680153s)
	W1006 14:22:03.821607  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:03.821626  806109 retry.go:31] will retry after 2.808613429s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:04.036608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.042059  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.044591  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.102238  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:04.537121  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:04.541031  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:04.544043  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:04.638355  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.045826  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.045915  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.046027  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.103126  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:05.536935  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:05.541096  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:05.543811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:05.601370  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.037342  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.048770  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.049575  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.102090  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.537158  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:06.541167  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:06.544718  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:06.601939  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:06.631301  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:07.036903  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.041275  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.046171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.101990  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:07.537306  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:07.542954  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:07.548030  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:07.602151  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.038923  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.045713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.048165  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.138614  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:08.453750  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.822414187s)
	W1006 14:22:08.453835  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:08.453869  806109 retry.go:31] will retry after 8.425837281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
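
The stderr above isolates the failure: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file carries neither an apiVersion nor a kind (typically an empty or truncated document left behind a stray "---" separator), so the apply exits with status 1 even though every object from ig-deployment.yaml is accepted as unchanged/configured. The following is a minimal Go sketch of that per-document check, using gopkg.in/yaml.v3 and an invented two-document manifest; it is illustrative only, not kubectl's or minikube's actual validation code.

    package main

    import (
        "fmt"
        "strings"

        "gopkg.in/yaml.v3"
    )

    // docHeader mirrors the two fields kubectl's validation insists on.
    type docHeader struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        // Hypothetical manifest: the second document is the kind of fragment
        // that produces "apiVersion not set, kind not set".
        manifest := "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gadget\n---\nmetadata:\n  name: broken-fragment\n"
        for i, doc := range strings.Split(manifest, "---\n") {
            var h docHeader
            if err := yaml.Unmarshal([]byte(doc), &h); err != nil {
                fmt.Printf("document %d: %v\n", i, err)
                continue
            }
            if h.APIVersion == "" || h.Kind == "" {
                fmt.Printf("document %d: apiVersion not set, kind not set\n", i)
            }
        }
    }

Running this prints the validation complaint for document 1 only, which matches the log: the deployment objects all apply cleanly and only the CRD file is rejected.
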
	I1006 14:22:08.536134  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:08.541309  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:08.543203  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:08.601173  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.037059  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.041277  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.043958  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.106411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:09.536191  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:09.540957  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:09.543212  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:09.637335  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.038746  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.041203  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.043968  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.101414  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:10.535919  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:10.541593  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:10.544180  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:10.601144  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.036181  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.041258  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.043931  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.102062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:11.536161  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:11.541576  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:11.545106  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:11.601994  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.037286  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.041743  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.043857  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.101936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:12.536252  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:12.542977  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:12.544737  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1006 14:22:12.602418  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.037636  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.043353  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.045541  806109 kapi.go:107] duration metric: took 29.504656348s to wait for kubernetes.io/minikube-addons=registry ...
	I1006 14:22:13.103856  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:13.536010  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:13.541542  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:13.602453  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.041118  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.101847  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:14.535955  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:14.540895  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:14.601210  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.038047  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.042436  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.101780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:15.536551  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:15.541754  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:15.601384  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.036266  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.041349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.101883  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.535728  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:16.540993  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:16.601091  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:16.880118  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:17.036213  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.041368  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.102032  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:17.536149  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:17.541821  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:17.606226  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.037103  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.041146  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.102447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:18.125066  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.244891148s)
	W1006 14:22:18.125106  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:18.125137  806109 retry.go:31] will retry after 8.394227584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:18.536459  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:18.541489  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:18.602140  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.036341  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.041843  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.101573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:19.536129  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:19.541594  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:19.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.036705  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.040761  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.101466  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:20.536346  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:20.541417  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:20.602109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.037009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.042008  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.103192  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:21.536872  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:21.545192  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:21.603991  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.036447  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.041450  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.101387  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:22.537530  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:22.547087  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:22.602381  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.038711  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.047024  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.102246  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:23.537465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:23.542053  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:23.602575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.037716  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.041932  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.105425  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:24.537009  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:24.540996  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:24.601164  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.037218  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.041462  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.101898  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:25.541274  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:25.541617  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:25.601533  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.037202  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.041027  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.101243  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:26.520530  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:26.537318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:26.541434  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:26.602288  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.036799  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.040735  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.101318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.536660  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:27.540656  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:27.601312  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:27.622677  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.102107139s)
	W1006 14:22:27.622764  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:27.622799  806109 retry.go:31] will retry after 8.964562377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:28.036352  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.041655  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.101317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:28.536873  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:28.542495  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:28.601848  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.037235  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.041321  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.101529  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:29.536608  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:29.541988  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:29.601332  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.067966  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.069628  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.102287  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:30.537456  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:30.541607  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:30.605527  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.047144  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.047366  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.102811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:31.540586  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:31.543600  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:31.601318  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.041560  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.101712  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:32.537074  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:32.541459  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:32.637575  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.037645  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.041762  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.101769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:33.537080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:33.546252  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:33.602460  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.049083  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.059194  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.102644  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:34.536345  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:34.541231  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:34.602566  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.036474  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.041683  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.101153  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:35.536516  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:35.543131  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:35.601301  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.040029  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.041789  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.101554  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:36.536713  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:36.541523  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:36.587821  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1006 14:22:36.637573  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.036522  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.042208  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.101356  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:37.538450  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:37.541912  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:37.601423  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.039073  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.041963  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.107975  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:38.260560  806109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.672700487s)
	W1006 14:22:38.260650  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:22:38.260684  806109 retry.go:31] will retry after 28.502029632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
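
Each failed apply feeds minikube's retry helper (retry.go:31); the observed delays (8.43s, 8.39s, 8.96s, then 28.50s, and later 17.72s) are consistent with a randomized, growing backoff between attempts. Below is a self-contained sketch of a jittered retry loop in that spirit; the function name, base delay, and jitter range are assumptions for illustration, not minikube's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter runs op up to attempts times, sleeping a randomized,
    // growing delay after each failure, and returns the last error.
    func retryWithJitter(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Jittered delay: base*2^i scaled by a random factor in [0.5, 1.5).
            d := time.Duration(float64(base) * float64(int64(1)<<i) * (0.5 + rand.Float64()))
            fmt.Printf("apply failed, will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithJitter(4, 5*time.Second, func() error {
            calls++
            if calls < 3 {
                return errors.New("Process exited with status 1")
            }
            return nil // pretend the third apply succeeds
        })
        fmt.Println("result:", err)
    }

In this run, however, every retry hits the same deterministic validation error, so the backoff only spaces out identical failures until the attempt budget is exhausted.
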
	I1006 14:22:38.537841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:38.541302  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:38.634080  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.042819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.044710  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.101819  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:39.536317  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:39.541291  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:39.602171  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.063837  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.065152  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.160263  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:40.536517  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:40.541760  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:40.601589  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.035811  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.040992  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.101764  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:41.537386  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:41.541696  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:41.638626  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.041509  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.042425  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.102420  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:42.536866  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:42.540382  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:42.602008  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.036485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.041855  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.104569  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:43.537538  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:43.541564  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:43.603912  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.036751  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.041644  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.100816  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:44.535598  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:44.540901  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:44.605465  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.067085  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.085831  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.104001  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:45.535733  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:45.541994  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:45.601937  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.037039  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.042662  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.100769  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:46.538350  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:46.542984  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:46.601745  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.036231  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.041572  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.101597  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:47.537411  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:47.541447  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:47.601925  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.036062  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.046387  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.106511  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:48.535973  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:48.541411  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:48.602406  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.082967  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.083089  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.101404  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:49.543349  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:49.543936  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:49.606022  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1006 14:22:50.052841  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.053282  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:50.101918  806109 kapi.go:107] duration metric: took 1m1.504246684s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1006 14:22:50.536780  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:50.540713  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.039833  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.041873  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:51.536470  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:51.541280  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.036677  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.041641  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:52.536085  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:52.540908  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.036694  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.041925  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:53.536756  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:53.541339  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.036706  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.041617  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:54.536485  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:54.541468  806109 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1006 14:22:55.054778  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:55.076569  806109 kapi.go:107] duration metric: took 1m11.538807076s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1006 14:22:55.536329  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.036624  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:56.535976  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.036354  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:57.536109  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.037892  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:58.536442  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.037351  806109 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1006 14:22:59.536233  806109 kapi.go:107] duration metric: took 1m8.503389262s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1006 14:22:59.539324  806109 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-006450 cluster.
	I1006 14:22:59.542088  806109 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1006 14:22:59.544863  806109 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
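
The kapi.go:96/kapi.go:107 pairs running through this log are a label-selector poll: roughly every half second minikube lists the pods matching a label, logs the Pending state until all of them report Ready, then emits a duration metric (29.5s for registry, 1m1.5s for csi-hostpath-driver, 1m11.5s for ingress-nginx, 1m8.5s for gcp-auth above). A condensed client-go sketch of such a wait follows; the helper names, the gcp-auth namespace, and the 6m timeout are assumptions for illustration, not minikube's exact code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsByLabel polls until every pod matching selector in ns is Ready.
    func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling; transient errors are retried
                }
                for _, p := range pods.Items {
                    if !podReady(&p) {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Namespace and label chosen to mirror the gcp-auth wait in this log.
        if err := waitForPodsByLabel(context.Background(), cs, "gcp-auth",
            "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
            panic(err)
        }
    }
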
	I1006 14:23:06.763823  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:07.625986  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:07.626019  806109 retry.go:31] will retry after 17.722294339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1006 14:23:25.349291  806109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1006 14:23:26.187865  806109 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1006 14:23:26.187971  806109 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1006 14:23:26.191145  806109 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, volcano, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1006 14:23:26.193747  806109 addons.go:514] duration metric: took 1m52.052915825s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher volcano metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1006 14:23:26.193810  806109 start.go:246] waiting for cluster config update ...
	I1006 14:23:26.193839  806109 start.go:255] writing updated cluster config ...
	I1006 14:23:26.194174  806109 ssh_runner.go:195] Run: rm -f paused
	I1006 14:23:26.198700  806109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:26.203281  806109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.213859  806109 pod_ready.go:94] pod "coredns-66bc5c9577-5b26c" is "Ready"
	I1006 14:23:26.213893  806109 pod_ready.go:86] duration metric: took 10.577014ms for pod "coredns-66bc5c9577-5b26c" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.216571  806109 pod_ready.go:83] waiting for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.223509  806109 pod_ready.go:94] pod "etcd-addons-006450" is "Ready"
	I1006 14:23:26.223539  806109 pod_ready.go:86] duration metric: took 6.938313ms for pod "etcd-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.226276  806109 pod_ready.go:83] waiting for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.230877  806109 pod_ready.go:94] pod "kube-apiserver-addons-006450" is "Ready"
	I1006 14:23:26.230912  806109 pod_ready.go:86] duration metric: took 4.607653ms for pod "kube-apiserver-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.233246  806109 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.603009  806109 pod_ready.go:94] pod "kube-controller-manager-addons-006450" is "Ready"
	I1006 14:23:26.603041  806109 pod_ready.go:86] duration metric: took 369.767385ms for pod "kube-controller-manager-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:26.803580  806109 pod_ready.go:83] waiting for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.202844  806109 pod_ready.go:94] pod "kube-proxy-rr8rw" is "Ready"
	I1006 14:23:27.202872  806109 pod_ready.go:86] duration metric: took 399.265658ms for pod "kube-proxy-rr8rw" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.402987  806109 pod_ready.go:83] waiting for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803050  806109 pod_ready.go:94] pod "kube-scheduler-addons-006450" is "Ready"
	I1006 14:23:27.803077  806109 pod_ready.go:86] duration metric: took 400.059334ms for pod "kube-scheduler-addons-006450" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:23:27.803090  806109 pod_ready.go:40] duration metric: took 1.604355795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:23:27.868687  806109 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 14:23:27.871326  806109 out.go:179] * Done! kubectl is now configured to use "addons-006450" cluster and "default" namespace by default
	
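The inspektor-gadget failure above is a client-side validation error: kubectl refuses any manifest whose top level omits `apiVersion` and `kind`, which usually means /etc/kubernetes/addons/ig-crd.yaml was written out empty or truncated. A minimal sketch of the two ways forward, assuming SSH access to the node (paths and the kubectl binary location are taken from the log line above):

    # Inspect the manifest first; a valid CRD file must open with both type
    # fields, e.g. "apiVersion: apiextensions.k8s.io/v1" and
    # "kind: CustomResourceDefinition".
    minikube -p addons-006450 ssh -- sudo head /etc/kubernetes/addons/ig-crd.yaml

    # Or, as the error message itself suggests, bypass validation entirely.
    # This only masks a broken file; it does not repair it.
    minikube -p addons-006450 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml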
	
	==> Docker <==
	Oct 06 14:41:29 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:41:29Z" level=error msg="error getting RW layer size for container ID 'd3025a0e45236953ba92b68c185b524d2d21666ea84574c4e4446438b791a562': Error response from daemon: No such container: d3025a0e45236953ba92b68c185b524d2d21666ea84574c4e4446438b791a562"
	Oct 06 14:41:29 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:41:29Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd3025a0e45236953ba92b68c185b524d2d21666ea84574c4e4446438b791a562'"
	Oct 06 14:41:32 addons-006450 dockerd[1123]: time="2025-10-06T14:41:32.875177696Z" level=info msg="ignoring event" container=aa8b68706bef2c4c58cb5e74a583c9bc1537096958fe895dc98845fd2ed6bb4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:41:33 addons-006450 dockerd[1123]: time="2025-10-06T14:41:33.017951819Z" level=info msg="ignoring event" container=48071d8f52e3be841861d6ca9a7073e2c630be334a6611801df617b8ba2c8627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:41:37 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:41:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f3cb37de2aedd60f554a014ae890baa2fe238073ac467ad53a68c6d257945d4/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:41:37 addons-006450 dockerd[1123]: time="2025-10-06T14:41:37.408882995Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:41:37 addons-006450 dockerd[1123]: time="2025-10-06T14:41:37.602172240Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:41:37 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:41:37Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Oct 06 14:41:39 addons-006450 dockerd[1123]: time="2025-10-06T14:41:39.369834703Z" level=info msg="ignoring event" container=8cf5351cc46428be3f868b246cec4a4a4acb3967888ebc197ad29040016667f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:41:39 addons-006450 dockerd[1123]: time="2025-10-06T14:41:39.499542673Z" level=info msg="ignoring event" container=4916510c10c2b58685b6635d616ec7508e4267a6a3d2e1044dcda24b823a0179 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:41:52 addons-006450 dockerd[1123]: time="2025-10-06T14:41:52.810550396Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:41:52 addons-006450 dockerd[1123]: time="2025-10-06T14:41:52.903610709Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:42:19 addons-006450 dockerd[1123]: time="2025-10-06T14:42:19.806505003Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:42:19 addons-006450 dockerd[1123]: time="2025-10-06T14:42:19.905932125Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:43:10 addons-006450 dockerd[1123]: time="2025-10-06T14:43:10.808788300Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:43:10 addons-006450 dockerd[1123]: time="2025-10-06T14:43:10.907778915Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:43:37 addons-006450 dockerd[1123]: time="2025-10-06T14:43:37.520169739Z" level=info msg="ignoring event" container=6f3cb37de2aedd60f554a014ae890baa2fe238073ac467ad53a68c6d257945d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:44:06 addons-006450 dockerd[1123]: time="2025-10-06T14:44:06.078736415Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:44:06 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:44:06Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 06 14:44:07 addons-006450 cri-dockerd[1424]: time="2025-10-06T14:44:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/24ba9cd5c45fc00488ce62e039fdcb737e3e215db8f7c879c262313d5073da08/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:44:08 addons-006450 dockerd[1123]: time="2025-10-06T14:44:08.031588842Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:44:08 addons-006450 dockerd[1123]: time="2025-10-06T14:44:08.115552167Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:44:17 addons-006450 dockerd[1123]: time="2025-10-06T14:44:17.977146131Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:44:21 addons-006450 dockerd[1123]: time="2025-10-06T14:44:21.804755411Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:44:21 addons-006450 dockerd[1123]: time="2025-10-06T14:44:21.912974233Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
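Every failed pull in the daemon log above has the same root cause: unauthenticated Docker Hub pulls from this runner's shared IP have hit the toomanyrequests rate limit. A minimal sketch of two workarounds, assuming Docker Hub credentials are available ($DOCKERHUB_USER is a placeholder, not something from this report):

    # Option A: authenticate the node's Docker daemon so pulls count against
    # a per-account quota instead of the anonymous per-IP one.
    minikube -p addons-006450 ssh -- docker login -u "$DOCKERHUB_USER"

    # Option B: pull once on the host and side-load the image into the
    # cluster, so the kubelet never contacts Docker Hub at all.
    docker pull nginx:alpine
    minikube -p addons-006450 image load nginx:alpine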
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                       NAMESPACE
	ffe6a9017df48       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                 12 minutes ago      Running             busybox                   0                   311174277f416       busybox                                   default
	509e7623ba228       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5   21 minutes ago      Running             gadget                    0                   14032f9fa6ab7       gadget-mwfpm                              gadget
	7c848b41913dc       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246              22 minutes ago      Running             local-path-provisioner    0                   dd0d4f86343b0       local-path-provisioner-648f6765c9-fmrx9   local-path-storage
	59bd3def26ae0       ba04bb24b9575                                                                                                       22 minutes ago      Running             storage-provisioner       0                   a23e97739eb30       storage-provisioner                       kube-system
	1f08a0b17053c       138784d87c9c5                                                                                                       22 minutes ago      Running             coredns                   0                   41c06ea8e8dab       coredns-66bc5c9577-5b26c                  kube-system
	2c89530d2d498       05baa95f5142d                                                                                                       22 minutes ago      Running             kube-proxy                0                   3401ff6190b48       kube-proxy-rr8rw                          kube-system
	9184b772f37f1       7eb2c6ff0c5a7                                                                                                       23 minutes ago      Running             kube-controller-manager   0                   431c21e60ec20       kube-controller-manager-addons-006450     kube-system
	16d61d5012e7c       b5f57ec6b9867                                                                                                       23 minutes ago      Running             kube-scheduler            0                   a52e4c8396f58       kube-scheduler-addons-006450              kube-system
	e5031a852e78a       43911e833d64d                                                                                                       23 minutes ago      Running             kube-apiserver            0                   dc93b2d9f3eda       kube-apiserver-addons-006450              kube-system
	57ec1a2227a7f       a1894772a478e                                                                                                       23 minutes ago      Running             etcd                      0                   31b1c12560e88       etcd-addons-006450                        kube-system
	
	
	==> coredns [1f08a0b17053] <==
	[INFO] 10.244.0.7:56542 - 33336 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002392707s
	[INFO] 10.244.0.7:56542 - 54232 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000178786s
	[INFO] 10.244.0.7:56542 - 7333 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136449s
	[INFO] 10.244.0.7:33056 - 46078 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000279019s
	[INFO] 10.244.0.7:33056 - 46299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298719s
	[INFO] 10.244.0.7:56424 - 24690 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0002456s
	[INFO] 10.244.0.7:56424 - 24468 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000268837s
	[INFO] 10.244.0.7:59046 - 6419 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000205798s
	[INFO] 10.244.0.7:59046 - 6231 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000164198s
	[INFO] 10.244.0.7:57987 - 61663 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001803179s
	[INFO] 10.244.0.7:57987 - 61843 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002072492s
	[INFO] 10.244.0.7:52614 - 11017 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000243541s
	[INFO] 10.244.0.7:52614 - 10853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000192292s
	[INFO] 10.244.0.26:44951 - 63731 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272135s
	[INFO] 10.244.0.26:43415 - 16328 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000118021s
	[INFO] 10.244.0.26:39889 - 25486 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139116s
	[INFO] 10.244.0.26:39105 - 18081 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154197s
	[INFO] 10.244.0.26:56273 - 11862 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000274474s
	[INFO] 10.244.0.26:44777 - 21446 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000313833s
	[INFO] 10.244.0.26:47488 - 37580 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00207181s
	[INFO] 10.244.0.26:50437 - 7597 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001591703s
	[INFO] 10.244.0.26:49063 - 42612 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001943943s
	[INFO] 10.244.0.26:39378 - 64309 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00241089s
	[INFO] 10.244.0.30:44861 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00027604s
	[INFO] 10.244.0.30:48981 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134175s
	
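The long NXDOMAIN chains above are expected behaviour, not lookup failures: with `options ndots:5` in the pod's resolv.conf (visible in the cri-dockerd rewrite earlier in this report), every short name is first tried against each search domain, and only the final fully-qualified attempt returns NOERROR. A quick way to see the difference from any pod that has nslookup, sketched here with a hypothetical pod name:

    # The trailing dot marks the name as fully qualified, skipping the search
    # list, so exactly one query reaches CoreDNS instead of five.
    kubectl --context addons-006450 exec some-pod -- \
      nslookup registry.kube-system.svc.cluster.local.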
	
	==> describe nodes <==
	Name:               addons-006450
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-006450
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=addons-006450
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_21_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-006450
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:21:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-006450
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:44:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:43:34 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:43:34 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:43:34 +0000   Mon, 06 Oct 2025 14:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:43:34 +0000   Mon, 06 Oct 2025 14:21:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-006450
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0364ef7d33ec438ea80b3763bd3b6ccc
	  System UUID:                35426571-e524-4094-b847-4e5d39cdb9e6
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-mwfpm                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-66bc5c9577-5b26c                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 etcd-addons-006450                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kube-apiserver-addons-006450                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-addons-006450                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-rr8rw                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-addons-006450                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  local-path-storage          helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-fmrx9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 22m                kube-proxy       
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 23m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   Starting                 23m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 22m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 22m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  22m                kubelet          Node addons-006450 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                kubelet          Node addons-006450 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m                kubelet          Node addons-006450 status is now: NodeHasSufficientPID
	  Normal   NodeReady                22m                kubelet          Node addons-006450 status is now: NodeReady
	  Normal   RegisteredNode           22m                node-controller  Node addons-006450 event: Registered Node addons-006450 in Controller
	
	
	==> dmesg <==
	[Oct 6 12:53] systemd-journald[226]: Failed to send stream file descriptor to service manager: Connection refused
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [57ec1a2227a7] <==
	{"level":"warn","ts":"2025-10-06T14:21:25.304509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.763248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:21:49.777779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.281548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.337199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.387982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.452451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.481768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.595747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.614909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.631591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.664368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.680487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.697752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.764439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:22:03.772435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:31:23.319583Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1712}
	{"level":"info","ts":"2025-10-06T14:31:23.387482Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1712,"took":"67.368638ms","hash":2638762742,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4431872,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2025-10-06T14:31:23.387544Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2638762742,"revision":1712,"compact-revision":-1}
	{"level":"info","ts":"2025-10-06T14:36:23.326234Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2212}
	{"level":"info","ts":"2025-10-06T14:36:23.346470Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2212,"took":"19.456428ms","hash":564227051,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":5521408,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-06T14:36:23.346528Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":564227051,"revision":2212,"compact-revision":1712}
	{"level":"info","ts":"2025-10-06T14:41:23.338861Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3098}
	{"level":"info","ts":"2025-10-06T14:41:23.362008Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3098,"took":"22.398831ms","hash":3350591826,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":3801088,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2025-10-06T14:41:23.362059Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3350591826,"revision":3098,"compact-revision":2212}
	
	
	==> kernel <==
	 14:44:22 up 21:26,  0 user,  load average: 1.23, 1.00, 1.44
	Linux addons-006450 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [e5031a852e78] <==
	W1006 14:32:03.182490       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1006 14:32:03.230092       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1006 14:32:04.084579       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1006 14:32:04.275777       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1006 14:32:21.681568       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33172: use of closed network connection
	E1006 14:32:22.105912       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33216: use of closed network connection
	I1006 14:32:31.969654       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.103.92"}
	I1006 14:33:02.323607       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1006 14:33:02.632504       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.99.116"}
	I1006 14:33:18.931472       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1006 14:39:12.704082       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.704131       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.737069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.737126       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.753776       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.753816       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.823015       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.823372       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1006 14:39:12.862406       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1006 14:39:12.862457       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1006 14:39:13.754468       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1006 14:39:13.862749       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1006 14:39:13.997181       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1006 14:41:07.482295       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I1006 14:41:26.220285       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9184b772f37f] <==
	E1006 14:43:38.503466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:39.882991       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:39.884302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:40.612386       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:40.613751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:41.756270       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:41.757354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:46.024160       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:46.025575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:48.616152       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:48.617559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:52.377374       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:52.378491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:43:56.114099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:43:56.115693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:44:00.460864       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:44:00.462472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:44:11.396235       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:44:11.397425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:44:11.833157       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:44:11.834292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:44:12.963931       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:44:12.965141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1006 14:44:20.679601       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1006 14:44:20.680817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
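These repeating "failed to list *v1.PartialObjectMetadata" errors line up with the apiserver log above, where the watchers for the volcano.sh and snapshot.storage.k8s.io cachers were terminated: the controller-manager's metadata informers are still trying to list resource types whose CRDs have just been deleted. A quick cross-check that the types are really gone:

    # If these return nothing, the "server could not find the requested
    # resource" errors are just informers catching up with CRD deletion.
    kubectl --context addons-006450 get crd | grep -E 'volcano|snapshot' || true
    kubectl --context addons-006450 api-resources | grep -E 'volcano|snapshot' || true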
	
	==> kube-proxy [2c89530d2d49] <==
	I1006 14:21:35.738189       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:21:35.837556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:21:35.938392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:21:35.938475       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:21:35.938596       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:21:36.026114       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:21:36.026170       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:21:36.061180       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:21:36.061523       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:21:36.061547       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:36.062743       1 config.go:200] "Starting service config controller"
	I1006 14:21:36.062767       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:21:36.063897       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:21:36.063910       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:21:36.063943       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:21:36.063947       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:21:36.064746       1 config.go:309] "Starting node config controller"
	I1006 14:21:36.064764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:21:36.064771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:21:36.163641       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:21:36.164636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:21:36.164662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
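The one warning in this section is kube-proxy noting that nodePortAddresses is unset, so NodePort connections are accepted on every local IP. A sketch of the fix the message itself suggests, assuming a kubeadm-style setup where kube-proxy reads its configuration from the kube-proxy ConfigMap (the ["primary"] value follows the KubeProxyConfiguration API; verify it against your cluster's version):

    # Restrict NodePorts to the node's primary interface, then restart kube-proxy.
    kubectl --context addons-006450 -n kube-system edit configmap kube-proxy
    #   in config.conf, set:  nodePortAddresses: ["primary"]
    kubectl --context addons-006450 -n kube-system rollout restart daemonset kube-proxy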
	
	==> kube-scheduler [16d61d5012e7] <==
	I1006 14:21:27.016131       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:21:27.020700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.020968       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:21:27.021894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:21:27.023886       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1006 14:21:27.029893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1006 14:21:27.030068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1006 14:21:27.038289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1006 14:21:27.038473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1006 14:21:27.038518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1006 14:21:27.038557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1006 14:21:27.040442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1006 14:21:27.040803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1006 14:21:27.040860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1006 14:21:27.040908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1006 14:21:27.040970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1006 14:21:27.041025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1006 14:21:27.041090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1006 14:21:27.041145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1006 14:21:27.041189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1006 14:21:27.041328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1006 14:21:27.041374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1006 14:21:27.041451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1006 14:21:27.041497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1006 14:21:28.621743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 06 14:43:48 addons-006450 kubelet[2258]: E1006 14:43:48.752543    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:43:53 addons-006450 kubelet[2258]: E1006 14:43:53.754065    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:44:03 addons-006450 kubelet[2258]: E1006 14:44:03.751802    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:44:06 addons-006450 kubelet[2258]: E1006 14:44:06.082928    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 06 14:44:06 addons-006450 kubelet[2258]: E1006 14:44:06.082984    2258 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 06 14:44:06 addons-006450 kubelet[2258]: E1006 14:44:06.083064    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(effafea4-bd61-4243-a42c-72930366d494): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:44:06 addons-006450 kubelet[2258]: E1006 14:44:06.083104    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:44:07 addons-006450 kubelet[2258]: I1006 14:44:07.618471    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x75tn\" (UniqueName: \"kubernetes.io/projected/e3235639-0706-408c-84cf-b2ea03177f50-kube-api-access-x75tn\") pod \"helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f\" (UID: \"e3235639-0706-408c-84cf-b2ea03177f50\") " pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f"
	Oct 06 14:44:07 addons-006450 kubelet[2258]: I1006 14:44:07.618542    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/e3235639-0706-408c-84cf-b2ea03177f50-data\") pod \"helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f\" (UID: \"e3235639-0706-408c-84cf-b2ea03177f50\") " pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f"
	Oct 06 14:44:07 addons-006450 kubelet[2258]: I1006 14:44:07.618565    2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/e3235639-0706-408c-84cf-b2ea03177f50-script\") pod \"helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f\" (UID: \"e3235639-0706-408c-84cf-b2ea03177f50\") " pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f"
	Oct 06 14:44:08 addons-006450 kubelet[2258]: E1006 14:44:08.120011    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:44:08 addons-006450 kubelet[2258]: E1006 14:44:08.120088    2258 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:44:08 addons-006450 kubelet[2258]: E1006 14:44:08.120389    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f_local-path-storage(e3235639-0706-408c-84cf-b2ea03177f50): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:44:08 addons-006450 kubelet[2258]: E1006 14:44:08.120441    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="e3235639-0706-408c-84cf-b2ea03177f50"
	Oct 06 14:44:08 addons-006450 kubelet[2258]: E1006 14:44:08.256313    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="e3235639-0706-408c-84cf-b2ea03177f50"
	Oct 06 14:44:13 addons-006450 kubelet[2258]: W1006 14:44:13.877236    2258 logging.go:55] [core] [Channel #80 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 06 14:44:17 addons-006450 kubelet[2258]: E1006 14:44:17.981045    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 06 14:44:17 addons-006450 kubelet[2258]: E1006 14:44:17.981103    2258 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 06 14:44:17 addons-006450 kubelet[2258]: E1006 14:44:17.981207    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(6e933703-adb9-4036-9530-9f2296a30c95): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:44:17 addons-006450 kubelet[2258]: E1006 14:44:17.981237    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="6e933703-adb9-4036-9530-9f2296a30c95"
	Oct 06 14:44:20 addons-006450 kubelet[2258]: E1006 14:44:20.755506    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="effafea4-bd61-4243-a42c-72930366d494"
	Oct 06 14:44:21 addons-006450 kubelet[2258]: E1006 14:44:21.915770    2258 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:44:21 addons-006450 kubelet[2258]: E1006 14:44:21.915824    2258 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 06 14:44:21 addons-006450 kubelet[2258]: E1006 14:44:21.915902    2258 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f_local-path-storage(e3235639-0706-408c-84cf-b2ea03177f50): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:44:21 addons-006450 kubelet[2258]: E1006 14:44:21.915937    2258 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" podUID="e3235639-0706-408c-84cf-b2ea03177f50"
	
	
	==> storage-provisioner [59bd3def26ae] <==
	W1006 14:43:56.804341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:43:58.807136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:43:58.812952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:00.816326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:00.823524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:02.827097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:02.831596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:04.834929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:04.840480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:06.843164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:06.850287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:08.854897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:08.862644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:10.865788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:10.870655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:12.873604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:12.878198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:14.881689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:14.886331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:16.889485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:16.894588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:18.899073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:18.903846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:20.906858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:44:20.911279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
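
The kubelet errors above share one root cause: unauthenticated pulls from docker.io are rejected with "toomanyrequests" (Docker Hub's anonymous pull rate limit), which is what keeps the helper-pod, nginx, and task-pv-pod containers from ever starting. The storage-provisioner warnings are unrelated and benign; they only flag the deprecated v1 Endpoints API. A possible mitigation sketch for this CI host, not part of the test run (the mirror URL is an example):

	# Authenticated pulls get a higher Docker Hub rate limit
	docker login
	# Or route pulls through a registry mirror when starting the cluster
	minikube start --registry-mirror=https://mirror.gcr.io
	# Or pre-load the images the tests need so no pull happens at test time
	minikube image load busybox:stable
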
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-006450 -n addons-006450
helpers_test.go:269: (dbg) Run:  kubectl --context addons-006450 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-006450 describe pod nginx task-pv-pod test-local-path helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-006450 describe pod nginx task-pv-pod test-local-path helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f: exit status 1 (106.374493ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-006450/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:33:02 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jbnj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jbnj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/nginx to addons-006450
	  Warning  Failed     9m45s (x3 over 11m)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m15s (x5 over 11m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8m14s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     8m14s (x2 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    71s (x44 over 11m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     71s (x44 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-006450/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:33:09 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zxjwt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-zxjwt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                  From                     Message
	  ----     ------              ----                 ----                     -------
	  Normal   Scheduled           11m                  default-scheduler        Successfully assigned default/task-pv-pod to addons-006450
	  Normal   Pulling             8m11s (x5 over 11m)  kubelet                  Pulling image "docker.io/nginx"
	  Warning  Failed              8m11s (x5 over 11m)  kubelet                  Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              8m11s (x5 over 11m)  kubelet                  Error: ErrImagePull
	  Warning  FailedAttachVolume  66s (x2 over 3m7s)   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-8cd4c658-d85f-400e-b690-afd6f73b4d07" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 60dd71e0-a2c1-11f0-8679-524208120bcb
	  Normal   BackOff             62s (x42 over 11m)   kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              62s (x42 over 11m)   kubelet                  Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2p7zd (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-2p7zd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-006450 describe pod nginx task-pv-pod test-local-path helper-pod-create-pvc-13df332f-1a27-405e-bce7-770e0006db8f: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.861750144s)
--- FAIL: TestAddons/parallel/LocalPath (345.52s)
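
All three described pods are blocked on the same docker.io rate limit, and task-pv-pod additionally hit a CSI attach timeout ("timed out waiting for external-attacher of hostpath.csi.k8s.io"); the NotFound error in stderr just means the short-lived helper pod was already cleaned up between the pod listing and the describe call. A triage sketch for the attach timeout, assuming the csi-hostpath addon runs its sidecars in kube-system (the pod and container names below are assumptions, not taken from this report):

	# Confirm the external-attacher sidecar is up, then inspect its logs
	kubectl --context addons-006450 -n kube-system get pods | grep csi
	kubectl --context addons-006450 -n kube-system logs csi-hostpathplugin-0 -c csi-attacher   # assumed names
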

TestFunctional/parallel/DashboardCmd (302.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-933184 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-933184 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-933184 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-933184 --alsologtostderr -v=1] stderr:
I1006 14:59:45.257906  864606 out.go:360] Setting OutFile to fd 1 ...
I1006 14:59:45.259440  864606 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:59:45.259473  864606 out.go:374] Setting ErrFile to fd 2...
I1006 14:59:45.259481  864606 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 14:59:45.259888  864606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
I1006 14:59:45.260274  864606 mustload.go:65] Loading cluster: functional-933184
I1006 14:59:45.260785  864606 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 14:59:45.261326  864606 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 14:59:45.285953  864606 host.go:66] Checking if "functional-933184" exists ...
I1006 14:59:45.286313  864606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1006 14:59:45.403010  864606 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 14:59:45.390135511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1006 14:59:45.403402  864606 api_server.go:166] Checking apiserver status ...
I1006 14:59:45.403554  864606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1006 14:59:45.403647  864606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 14:59:45.430602  864606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 14:59:45.542163  864606 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9428/cgroup
I1006 14:59:45.555876  864606 api_server.go:182] apiserver freezer: "6:freezer:/docker/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/kubepods/burstable/pod91cf888353d94de48c6b65d71d238773/6fcdf6f551c14aa7467b4d4ea8e8350ce6a5abb32f1d837e1ca4b2d46c8ece03"
I1006 14:59:45.555968  864606 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/kubepods/burstable/pod91cf888353d94de48c6b65d71d238773/6fcdf6f551c14aa7467b4d4ea8e8350ce6a5abb32f1d837e1ca4b2d46c8ece03/freezer.state
I1006 14:59:45.564813  864606 api_server.go:204] freezer state: "THAWED"
I1006 14:59:45.564846  864606 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1006 14:59:45.574332  864606 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1006 14:59:45.574377  864606 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1006 14:59:45.574572  864606 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 14:59:45.574584  864606 addons.go:69] Setting dashboard=true in profile "functional-933184"
I1006 14:59:45.574601  864606 addons.go:238] Setting addon dashboard=true in "functional-933184"
I1006 14:59:45.574629  864606 host.go:66] Checking if "functional-933184" exists ...
I1006 14:59:45.575072  864606 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 14:59:45.606777  864606 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1006 14:59:45.609780  864606 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1006 14:59:45.612680  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1006 14:59:45.612714  864606 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1006 14:59:45.612829  864606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 14:59:45.631624  864606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 14:59:45.738374  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1006 14:59:45.738396  864606 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1006 14:59:45.752006  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1006 14:59:45.752034  864606 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1006 14:59:45.766281  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1006 14:59:45.766307  864606 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1006 14:59:45.783708  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1006 14:59:45.783730  864606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1006 14:59:45.797711  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1006 14:59:45.797734  864606 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1006 14:59:45.812083  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1006 14:59:45.812115  864606 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1006 14:59:45.826541  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1006 14:59:45.826565  864606 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1006 14:59:45.840626  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1006 14:59:45.840653  864606 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1006 14:59:45.856208  864606 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1006 14:59:45.856254  864606 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1006 14:59:45.871895  864606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1006 14:59:46.852248  864606 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-933184 addons enable metrics-server

I1006 14:59:46.855770  864606 addons.go:201] Writing out "functional-933184" config to set dashboard=true...
W1006 14:59:46.856027  864606 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1006 14:59:46.856896  864606 kapi.go:59] client config for functional-933184: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt", KeyFile:"/home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.key", CAFile:"/home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1006 14:59:46.857441  864606 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1006 14:59:46.857472  864606 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1006 14:59:46.857493  864606 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1006 14:59:46.857515  864606 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1006 14:59:46.857537  864606 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1006 14:59:46.874423  864606 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  99da432e-7d59-4c2b-b9a1-c0d4f7105c9e 1543 0 2025-10-06 14:59:46 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-06 14:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.166.135,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.166.135],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1006 14:59:46.874612  864606 out.go:285] * Launching proxy ...
* Launching proxy ...
I1006 14:59:46.874711  864606 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-933184 proxy --port 36195]
I1006 14:59:46.875410  864606 dashboard.go:157] Waiting for kubectl to output host:port ...
I1006 14:59:46.953337  864606 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1006 14:59:46.953386  864606 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1006 14:59:46.971376  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f0ee4d69-0c85-4fc4-bd30-16bbf1cf2a71] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x40007c5dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000447a40 TLS:<nil>}
I1006 14:59:46.971446  864606 retry.go:31] will retry after 121.924µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.975108  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ffa405cc-076b-484b-8335-d214682dae47] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x40007c5ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000447b80 TLS:<nil>}
I1006 14:59:46.975178  864606 retry.go:31] will retry after 91.855µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.978919  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ce317db-a765-4acc-9a0d-d550331a138d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x4001664000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000447cc0 TLS:<nil>}
I1006 14:59:46.978975  864606 retry.go:31] will retry after 264.065µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.982659  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0092e89e-ab01-4c4f-a78e-7eab503d399f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x4001664080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000447e00 TLS:<nil>}
I1006 14:59:46.982727  864606 retry.go:31] will retry after 260.859µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.986625  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[de0e255b-2afa-4fcf-ad4d-4f806e5e5f81] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x4001664100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee000 TLS:<nil>}
I1006 14:59:46.986685  864606 retry.go:31] will retry after 341.195µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.990461  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1ee22fe-230e-4809-adb4-61b99e17b380] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x4001664180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee140 TLS:<nil>}
I1006 14:59:46.990560  864606 retry.go:31] will retry after 492.688µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.994368  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[14e02dc6-c03f-4981-8648-ae517442fcad] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x40015fc0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048f400 TLS:<nil>}
I1006 14:59:46.994431  864606 retry.go:31] will retry after 952.168µs: Temporary Error: unexpected response code: 503
I1006 14:59:46.998210  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[23a658e9-ae9c-4e2b-871a-31aa8a217540] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:46 GMT]] Body:0x40015fc140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048fcc0 TLS:<nil>}
I1006 14:59:46.998260  864606 retry.go:31] will retry after 1.827416ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.003608  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[19cec609-5088-4965-a357-62fa68fba256] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400048fe00 TLS:<nil>}
I1006 14:59:47.003738  864606 retry.go:31] will retry after 1.755236ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.009407  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d23f5d8c-2bb7-4369-8c62-3f9df07ed40e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000397540 TLS:<nil>}
I1006 14:59:47.009472  864606 retry.go:31] will retry after 2.678946ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.015732  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bcded5af-4592-41be-96e3-7bfae5c981dd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x4001664400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee280 TLS:<nil>}
I1006 14:59:47.015800  864606 retry.go:31] will retry after 4.782108ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.024241  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[619c59b4-2a60-459c-b410-09684f09899f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000397680 TLS:<nil>}
I1006 14:59:47.024352  864606 retry.go:31] will retry after 9.470574ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.037693  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fac09e9-4882-4b96-aa6b-1155a07bcdbf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x4001664500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee3c0 TLS:<nil>}
I1006 14:59:47.037753  864606 retry.go:31] will retry after 16.333534ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.059055  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5fe9abdc-1b5c-4dbc-bf6c-16775e086248] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x4001664580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee500 TLS:<nil>}
I1006 14:59:47.059136  864606 retry.go:31] will retry after 11.383104ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.074381  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2c71313-d151-48a5-ae32-2f111bf7deb8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003977c0 TLS:<nil>}
I1006 14:59:47.074465  864606 retry.go:31] will retry after 24.689224ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.102665  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a1531bd-1c76-4a15-b69e-1ffe1ce69fb0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000397900 TLS:<nil>}
I1006 14:59:47.102751  864606 retry.go:31] will retry after 51.223203ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.158342  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[79de8423-0ca7-4851-9f65-d359dd0e0b3e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x4001664700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee640 TLS:<nil>}
I1006 14:59:47.158438  864606 retry.go:31] will retry after 51.551866ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.214378  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f28f64bb-808a-4fd4-9d8c-83a0dce9bdaa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000397a40 TLS:<nil>}
I1006 14:59:47.214467  864606 retry.go:31] will retry after 82.939086ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.301391  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[27f84d69-a2a2-4cd5-aeb9-b8850202a337] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x4001664800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee780 TLS:<nil>}
I1006 14:59:47.301488  864606 retry.go:31] will retry after 165.256517ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.471035  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d4ac904d-71e4-4d8b-adb7-a7d53d4265b0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40016648c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000397b80 TLS:<nil>}
I1006 14:59:47.471116  864606 retry.go:31] will retry after 268.225903ms: Temporary Error: unexpected response code: 503
I1006 14:59:47.742467  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eef3c91a-0d6b-4322-a735-76f0a291b5aa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:47 GMT]] Body:0x40015fc800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002ee8c0 TLS:<nil>}
I1006 14:59:47.742546  864606 retry.go:31] will retry after 472.470966ms: Temporary Error: unexpected response code: 503
I1006 14:59:48.218303  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04ab3e02-046d-4d1f-ad47-78d7e2f47ef8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:48 GMT]] Body:0x4001664940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002eea00 TLS:<nil>}
I1006 14:59:48.218396  864606 retry.go:31] will retry after 440.034519ms: Temporary Error: unexpected response code: 503
I1006 14:59:48.661908  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0de15ae4-6047-4d6d-994f-4614710daf5b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:48 GMT]] Body:0x40015fc980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002eeb40 TLS:<nil>}
I1006 14:59:48.661973  864606 retry.go:31] will retry after 992.160844ms: Temporary Error: unexpected response code: 503
I1006 14:59:49.658545  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d6b3978e-e8e7-4d60-af37-d7c6cb2e49a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:49 GMT]] Body:0x40015fca40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002eec80 TLS:<nil>}
I1006 14:59:49.658606  864606 retry.go:31] will retry after 1.417021605s: Temporary Error: unexpected response code: 503
I1006 14:59:51.079745  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03bb8c96-3087-429c-8a36-e0a76a77aa53] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:51 GMT]] Body:0x40015fcb00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002eedc0 TLS:<nil>}
I1006 14:59:51.079811  864606 retry.go:31] will retry after 1.52046081s: Temporary Error: unexpected response code: 503
I1006 14:59:52.606400  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72d92ce6-7583-4530-a946-ed9aa0800b8d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:52 GMT]] Body:0x40015fcb80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728000 TLS:<nil>}
I1006 14:59:52.606463  864606 retry.go:31] will retry after 1.664128932s: Temporary Error: unexpected response code: 503
I1006 14:59:54.275022  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a2c401a-9f22-4af4-9d6f-e789edd99254] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:54 GMT]] Body:0x4001664b40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728140 TLS:<nil>}
I1006 14:59:54.275082  864606 retry.go:31] will retry after 4.649224182s: Temporary Error: unexpected response code: 503
I1006 14:59:58.928853  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[17183875-3018-4e6c-b6bb-47cfb49f941a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 14:59:58 GMT]] Body:0x40015fcc00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728280 TLS:<nil>}
I1006 14:59:58.928920  864606 retry.go:31] will retry after 4.595348475s: Temporary Error: unexpected response code: 503
I1006 15:00:03.530253  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a81aeb72-f9ad-41e6-a028-985886260ea5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:00:03 GMT]] Body:0x40015fcc80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40017283c0 TLS:<nil>}
I1006 15:00:03.530363  864606 retry.go:31] will retry after 5.493250971s: Temporary Error: unexpected response code: 503
I1006 15:00:09.029269  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7bbf11c3-186b-43f1-9022-bcfc021b14a8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:00:09 GMT]] Body:0x40015fcd00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728500 TLS:<nil>}
I1006 15:00:09.029329  864606 retry.go:31] will retry after 7.151293408s: Temporary Error: unexpected response code: 503
I1006 15:00:16.183932  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e678522e-b70d-4497-949b-e57df63f7932] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:00:16 GMT]] Body:0x4001664d80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40002eef00 TLS:<nil>}
I1006 15:00:16.183997  864606 retry.go:31] will retry after 26.701074708s: Temporary Error: unexpected response code: 503
I1006 15:00:42.888723  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dcbe3006-97df-4040-a051-e25251c32f9a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:00:42 GMT]] Body:0x4001664e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728640 TLS:<nil>}
I1006 15:00:42.888803  864606 retry.go:31] will retry after 38.020418874s: Temporary Error: unexpected response code: 503
I1006 15:01:20.912351  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c2e12d0-8119-472c-afaf-0e52093d6eda] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:01:20 GMT]] Body:0x4001664f00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728780 TLS:<nil>}
I1006 15:01:20.912421  864606 retry.go:31] will retry after 59.087135895s: Temporary Error: unexpected response code: 503
I1006 15:02:20.012611  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ff28177-f2cd-43d2-aab8-7fa63ea22aa3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:02:20 GMT]] Body:0x4001664080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40017288c0 TLS:<nil>}
I1006 15:02:20.012693  864606 retry.go:31] will retry after 1m22.54759854s: Temporary Error: unexpected response code: 503
I1006 15:03:42.563444  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[81524da5-45e4-42d9-97ec-d885d173000b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:03:42 GMT]] Body:0x4001664140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728a00 TLS:<nil>}
I1006 15:03:42.563512  864606 retry.go:31] will retry after 46.422169981s: Temporary Error: unexpected response code: 503
I1006 15:04:28.990812  864606 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[729d6346-7a82-46f4-b428-b82e8fbe048b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 06 Oct 2025 15:04:28 GMT]] Body:0x40015fc140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4001728b40 TLS:<nil>}
I1006 15:04:28.990893  864606 retry.go:31] will retry after 47.235846705s: Temporary Error: unexpected response code: 503
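
The proxy health check keeps returning 503 because the kubernetes-dashboard Service has no ready endpoints to forward to; given the pull rate limiting earlier in this run, the dashboard pod most likely never pulled docker.io/kubernetesui/dashboard:v2.7.0, so no URL is ever printed and the test times out. A quick check, reusing the k8s-app=kubernetes-dashboard selector from the Service dump above:

	kubectl --context functional-933184 -n kubernetes-dashboard get pods
	kubectl --context functional-933184 -n kubernetes-dashboard describe pod -l k8s-app=kubernetes-dashboard
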
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-933184
helpers_test.go:243: (dbg) docker inspect functional-933184:
-- stdout --
	[
	    {
	        "Id": "5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501",
	        "Created": "2025-10-06T14:46:07.46263544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 846873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:46:07.527334264Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/hosts",
	        "LogPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501-json.log",
	        "Name": "/functional-933184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-933184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-933184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501",
	                "LowerDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-933184",
	                "Source": "/var/lib/docker/volumes/functional-933184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-933184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-933184",
	                "name.minikube.sigs.k8s.io": "functional-933184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4c74fefb54016d9db8a4692ad25d486a608673942af5fac2a3ceb965acb0bf5",
	            "SandboxKey": "/var/run/docker/netns/a4c74fefb540",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37520"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37518"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37519"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-933184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:6c:0f:8d:60:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4301fd96426a7648713373a63590dd66ad770b3f3e9d1c28d9ad21b65bbabb96",
	                    "EndpointID": "cd029ae863a08454ef7f268f8582fe95c68cbe2ca1d8e537bad3d56c5c93c68b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-933184",
	                        "5fffa5167caa"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-933184 -n functional-933184
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 logs -n 25: (1.215924441s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-933184 image save kicbase/echo-server:functional-933184 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ image          │ functional-933184 image rm kicbase/echo-server:functional-933184 --alsologtostderr                                                                          │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ image          │ functional-933184 image ls                                                                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ image          │ functional-933184 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ image          │ functional-933184 image ls                                                                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ image          │ functional-933184 image save --daemon kicbase/echo-server:functional-933184 --alsologtostderr                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ docker-env     │ functional-933184 docker-env                                                                                                                                │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ docker-env     │ functional-933184 docker-env                                                                                                                                │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 14:59 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /etc/test/nested/copy/805351/hosts                                                                                           │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:59 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /etc/ssl/certs/805351.pem                                                                                                    │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /usr/share/ca-certificates/805351.pem                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                    │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /etc/ssl/certs/8053512.pem                                                                                                   │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /usr/share/ca-certificates/8053512.pem                                                                                       │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                    │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ image          │ functional-933184 image ls --format short --alsologtostderr                                                                                                 │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ image          │ functional-933184 image ls --format yaml --alsologtostderr                                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ ssh            │ functional-933184 ssh pgrep buildkitd                                                                                                                       │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │                     │
	│ image          │ functional-933184 image build -t localhost/my-image:functional-933184 testdata/build --alsologtostderr                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ image          │ functional-933184 image ls                                                                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ image          │ functional-933184 image ls --format json --alsologtostderr                                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ image          │ functional-933184 image ls --format table --alsologtostderr                                                                                                 │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ update-context │ functional-933184 update-context --alsologtostderr -v=2                                                                                                     │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ update-context │ functional-933184 update-context --alsologtostderr -v=2                                                                                                     │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	│ update-context │ functional-933184 update-context --alsologtostderr -v=2                                                                                                     │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 15:00 UTC │ 06 Oct 25 15:00 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:59:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:59:44.861829  864532 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:59:44.861950  864532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:59:44.861966  864532 out.go:374] Setting ErrFile to fd 2...
	I1006 14:59:44.861971  864532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:59:44.862246  864532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:59:44.862614  864532 out.go:368] Setting JSON to false
	I1006 14:59:44.863578  864532 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":78137,"bootTime":1759684648,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:59:44.863649  864532 start.go:140] virtualization:  
	I1006 14:59:44.867506  864532 out.go:179] * [functional-933184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:59:44.871124  864532 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:59:44.871284  864532 notify.go:220] Checking for updates...
	I1006 14:59:44.878850  864532 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:59:44.883981  864532 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:59:44.887007  864532 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:59:44.889890  864532 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:59:44.892766  864532 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:59:44.896226  864532 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:59:44.896784  864532 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:59:44.928818  864532 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:59:44.929006  864532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:59:44.987376  864532 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 14:59:44.977738095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:59:44.987484  864532 docker.go:318] overlay module found
	I1006 14:59:44.990607  864532 out.go:179] * Using the docker driver based on existing profile
	I1006 14:59:44.993510  864532 start.go:304] selected driver: docker
	I1006 14:59:44.993536  864532 start.go:924] validating driver "docker" against &{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:59:44.993661  864532 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:59:44.993772  864532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:59:45.152739  864532 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 14:59:45.139843428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:59:45.153178  864532 cni.go:84] Creating CNI manager for ""
	I1006 14:59:45.153256  864532 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:59:45.153314  864532 start.go:348] cluster config:
	{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:59:45.158821  864532 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Oct 06 14:59:47 functional-933184 dockerd[6746]: time="2025-10-06T14:59:47.309855702Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 06 14:59:47 functional-933184 dockerd[6746]: time="2025-10-06T14:59:47.407640388Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:59:47 functional-933184 dockerd[6746]: time="2025-10-06T14:59:47.450697462Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 06 14:59:47 functional-933184 dockerd[6746]: time="2025-10-06T14:59:47.555082889Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:59:48 functional-933184 dockerd[6746]: time="2025-10-06T14:59:48.485864461Z" level=info msg="ignoring event" container=1c03a6abf46d25c5eccd927f9a6ef64ac0c1fdd332652133806b4630222fe452 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 15:00:01 functional-933184 dockerd[6746]: time="2025-10-06T15:00:01.387157857Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 06 15:00:02 functional-933184 dockerd[6746]: time="2025-10-06T15:00:02.651285980Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:00:04 functional-933184 dockerd[6746]: time="2025-10-06T15:00:04.344168374Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 06 15:00:04 functional-933184 dockerd[6746]: time="2025-10-06T15:00:04.434730881Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:00:13 functional-933184 dockerd[6746]: time="2025-10-06T15:00:13.502379705Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:00:28 functional-933184 dockerd[6746]: time="2025-10-06T15:00:28.346549155Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 06 15:00:28 functional-933184 dockerd[6746]: time="2025-10-06T15:00:28.431414506Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:00:33 functional-933184 dockerd[6746]: time="2025-10-06T15:00:33.325420900Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 06 15:00:33 functional-933184 dockerd[6746]: time="2025-10-06T15:00:33.420586147Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:00:34 functional-933184 cri-dockerd[7516]: time="2025-10-06T15:00:34Z" level=info msg="Stop pulling image kicbase/echo-server:latest: Status: Image is up to date for kicbase/echo-server:latest"
	Oct 06 15:01:11 functional-933184 dockerd[6746]: time="2025-10-06T15:01:11.326236372Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 06 15:01:11 functional-933184 dockerd[6746]: time="2025-10-06T15:01:11.514410917Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:01:11 functional-933184 cri-dockerd[7516]: time="2025-10-06T15:01:11Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Oct 06 15:01:22 functional-933184 dockerd[6746]: time="2025-10-06T15:01:22.340662853Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 06 15:01:22 functional-933184 dockerd[6746]: time="2025-10-06T15:01:22.427795124Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:02:38 functional-933184 dockerd[6746]: time="2025-10-06T15:02:38.341530781Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 06 15:02:38 functional-933184 dockerd[6746]: time="2025-10-06T15:02:38.443415174Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:02:53 functional-933184 dockerd[6746]: time="2025-10-06T15:02:53.334014733Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 06 15:02:53 functional-933184 dockerd[6746]: time="2025-10-06T15:02:53.558924606Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 15:02:53 functional-933184 cri-dockerd[7516]: time="2025-10-06T15:02:53Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
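	Every dashboard and metrics-scraper pull above fails with "toomanyrequests", i.e. Docker Hub's unauthenticated pull rate limit, which is why the dashboard pods in this run never became ready. A hedged mitigation sketch, assuming Docker Hub credentials are available on the CI host (minikube's image load is the same subcommand that appears in the Audit table above):
	
	    # Pull once on the host with authenticated credentials, then side-load the image
	    # into the minikube node so the in-cluster kubelet never hits the anonymous limit.
	    docker login
	    docker pull docker.io/kubernetesui/dashboard:v2.7.0
	    minikube -p functional-933184 image load docker.io/kubernetesui/dashboard:v2.7.0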
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	677b4e0630db2       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           4 minutes ago       Running             echo-server               0                   37b0bf96d0811       hello-node-connect-7d85dfc575-8vhg5         default
	7e4dd96710100       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   1c03a6abf46d2       busybox-mount                               default
	9222476e66295       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   f382d636608f6       hello-node-75c85bcc94-v749q                 default
	6d9ba03b8863c       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         15 minutes ago      Running             nginx                     0                   b86fa3d466cfc       nginx-svc                                   default
	67fbd4a16067e       ba04bb24b9575                                                                                         15 minutes ago      Running             storage-provisioner       4                   154edd9f915d2       storage-provisioner                         kube-system
	2e9d6beb6e280       ba04bb24b9575                                                                                         15 minutes ago      Exited              storage-provisioner       3                   154edd9f915d2       storage-provisioner                         kube-system
	156725d59efd7       05baa95f5142d                                                                                         15 minutes ago      Running             kube-proxy                3                   21b4db5f04536       kube-proxy-zdgg7                            kube-system
	bd006eabbe87b       138784d87c9c5                                                                                         15 minutes ago      Running             coredns                   2                   a3bbd10247337       coredns-66bc5c9577-9mq5b                    kube-system
	6fcdf6f551c14       43911e833d64d                                                                                         15 minutes ago      Running             kube-apiserver            0                   89e3edb922f15       kube-apiserver-functional-933184            kube-system
	8e509ed52ab67       b5f57ec6b9867                                                                                         15 minutes ago      Running             kube-scheduler            3                   feb368f89cc4e       kube-scheduler-functional-933184            kube-system
	bcb39dc782d61       7eb2c6ff0c5a7                                                                                         15 minutes ago      Running             kube-controller-manager   3                   c226e4161bd80       kube-controller-manager-functional-933184   kube-system
	ab99eb78d7130       a1894772a478e                                                                                         15 minutes ago      Running             etcd                      2                   ecf4d7659f06c       etcd-functional-933184                      kube-system
	f0543848bada1       b5f57ec6b9867                                                                                         15 minutes ago      Exited              kube-scheduler            2                   47f1155aea2db       kube-scheduler-functional-933184            kube-system
	dbf089d9c53ce       05baa95f5142d                                                                                         15 minutes ago      Exited              kube-proxy                2                   d73913b2b29fc       kube-proxy-zdgg7                            kube-system
	550fc01c34458       7eb2c6ff0c5a7                                                                                         15 minutes ago      Exited              kube-controller-manager   2                   ec8bd41a3bb5b       kube-controller-manager-functional-933184   kube-system
	427dc6f962780       138784d87c9c5                                                                                         16 minutes ago      Exited              coredns                   1                   ac7201a1c2c4f       coredns-66bc5c9577-9mq5b                    kube-system
	402bdb9bee67e       a1894772a478e                                                                                         16 minutes ago      Exited              etcd                      1                   bf3ae8b955f98       etcd-functional-933184                      kube-system
	
	
	==> coredns [427dc6f96278] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38471 - 40348 "HINFO IN 1928811608007205393.1693971363683984954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014901569s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd006eabbe87] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43853 - 59172 "HINFO IN 6573570928990532390.5180121269576993269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013433314s
	
	
	==> describe nodes <==
	Name:               functional-933184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-933184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=functional-933184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_46_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 15:04:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 15:00:26 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 15:00:26 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 15:00:26 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 15:00:26 +0000   Mon, 06 Oct 2025 14:46:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-933184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ab0fdbff02456391dde75296bb36e5
	  System UUID:                9a0c63bd-fa52-4df3-ab5b-d64d258d24eb
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-v749q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-8vhg5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-9mq5b                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-functional-933184                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kube-apiserver-functional-933184              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-933184     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-zdgg7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-functional-933184              100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-pgh6s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2zsnh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           18m                node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	  Normal   NodeReady                18m                kubelet          Node functional-933184 status is now: NodeReady
	  Normal   RegisteredNode           16m                node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	  Warning  ContainerGCFailed        16m (x2 over 17m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	
	
	==> dmesg <==
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [402bdb9bee67] <==
	{"level":"warn","ts":"2025-10-06T14:47:57.087762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.109164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.128203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.162961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.174001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.196398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.300785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:48:37.808552Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-06T14:48:37.808609Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-933184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-06T14:48:37.808717Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T14:48:44.815049Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T14:48:44.815130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.815150Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-06T14:48:44.815235Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-06T14:48:44.815247Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817595Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817718Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T14:48:44.817751Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817963Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T14:48:44.818014Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T14:48:44.818047Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.822615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-06T14:48:44.822718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.822982Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-06T14:48:44.823005Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-933184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ab99eb78d713] <==
	{"level":"warn","ts":"2025-10-06T14:49:00.506841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.532156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.559195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.584785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.615586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.650951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.680894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.705590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.745295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.767876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.793778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.850829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.901664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.924749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.954731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.988428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.014783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.047809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.169500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45350","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:58:58.936061Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1136}
	{"level":"info","ts":"2025-10-06T14:58:58.959169Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1136,"took":"22.758864ms","hash":3482489076,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1490944,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-06T14:58:58.959218Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3482489076,"revision":1136,"compact-revision":-1}
	{"level":"info","ts":"2025-10-06T15:03:58.942642Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1440}
	{"level":"info","ts":"2025-10-06T15:03:58.946477Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1440,"took":"3.290843ms","hash":1858620341,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2314240,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2025-10-06T15:03:58.946537Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1858620341,"revision":1440,"compact-revision":1136}
	
	
	==> kernel <==
	 15:04:46 up 21:47,  0 user,  load average: 0.74, 0.42, 0.73
	Linux functional-933184 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [6fcdf6f551c1] <==
	I1006 14:49:02.326090       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 14:49:02.326339       1 aggregator.go:171] initial CRD sync complete...
	I1006 14:49:02.326353       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 14:49:02.326360       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 14:49:02.326366       1 cache.go:39] Caches are synced for autoregister controller
	I1006 14:49:02.345527       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 14:49:02.365629       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 14:49:02.405179       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 14:49:02.916474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1006 14:49:03.329580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1006 14:49:03.331065       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 14:49:03.337228       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 14:49:03.951315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 14:49:03.989764       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 14:49:04.028675       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 14:49:04.038814       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 14:49:05.810549       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 14:49:19.891278       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.228.43"}
	I1006 14:49:27.023450       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.166.75"}
	I1006 14:49:36.614963       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.97.222"}
	I1006 14:53:36.527959       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.124.112"}
	I1006 14:59:02.265017       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 14:59:46.405301       1 controller.go:667] quota admission added evaluator for: namespaces
	I1006 14:59:46.789998       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.166.135"}
	I1006 14:59:46.832342       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.192.199"}
	
	
	==> kube-controller-manager [550fc01c3445] <==
	
	
	==> kube-controller-manager [bcb39dc782d6] <==
	I1006 14:49:05.496597       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 14:49:05.499335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 14:49:05.501471       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 14:49:05.501811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 14:49:05.502067       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 14:49:05.502221       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 14:49:05.502441       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 14:49:05.504112       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 14:49:05.504431       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 14:49:05.504694       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 14:49:05.504891       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1006 14:49:05.505339       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 14:49:05.511446       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 14:49:05.514540       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 14:49:05.518161       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1006 14:59:46.527795       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.539296       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.565688       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.567514       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.577007       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.577767       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.593377       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.601041       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.604371       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1006 14:59:46.622230       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [156725d59efd] <==
	I1006 14:49:03.079329       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:49:03.275374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:49:03.380514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:49:03.380553       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:49:03.380667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:49:03.400464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:49:03.400701       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:49:03.404946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:49:03.405244       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:49:03.405267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:49:03.406398       1 config.go:200] "Starting service config controller"
	I1006 14:49:03.406417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:49:03.415411       1 config.go:309] "Starting node config controller"
	I1006 14:49:03.415431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:49:03.415440       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:49:03.415867       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:49:03.415885       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:49:03.415900       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:49:03.415904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:49:03.506935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:49:03.516366       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:49:03.516382       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dbf089d9c53c] <==
	
	
	==> kube-scheduler [8e509ed52ab6] <==
	I1006 14:49:01.155166       1 serving.go:386] Generated self-signed cert in-memory
	I1006 14:49:02.830831       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 14:49:02.830869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:49:02.837907       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 14:49:02.837965       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 14:49:02.838019       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:49:02.838034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:49:02.838050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.838062       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.838500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:49:02.838651       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 14:49:02.938942       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.939304       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 14:49:02.939325       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f0543848bada] <==
	
	
	==> kubelet <==
	Oct 06 15:03:03 functional-933184 kubelet[9144]: E1006 15:03:03.282930    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:03:05 functional-933184 kubelet[9144]: E1006 15:03:05.286634    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:03:05 functional-933184 kubelet[9144]: E1006 15:03:05.286991    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:03:15 functional-933184 kubelet[9144]: E1006 15:03:15.282492    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:03:18 functional-933184 kubelet[9144]: E1006 15:03:18.284826    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:03:18 functional-933184 kubelet[9144]: E1006 15:03:18.292096    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:03:28 functional-933184 kubelet[9144]: E1006 15:03:28.283110    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:03:29 functional-933184 kubelet[9144]: E1006 15:03:29.285219    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:03:31 functional-933184 kubelet[9144]: E1006 15:03:31.284499    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:03:42 functional-933184 kubelet[9144]: E1006 15:03:42.284194    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:03:43 functional-933184 kubelet[9144]: E1006 15:03:43.284506    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:03:46 functional-933184 kubelet[9144]: E1006 15:03:46.286570    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:03:54 functional-933184 kubelet[9144]: E1006 15:03:54.284464    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:03:56 functional-933184 kubelet[9144]: E1006 15:03:56.282663    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:03:58 functional-933184 kubelet[9144]: E1006 15:03:58.285983    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:04:07 functional-933184 kubelet[9144]: E1006 15:04:07.285159    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:04:08 functional-933184 kubelet[9144]: E1006 15:04:08.283148    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:04:12 functional-933184 kubelet[9144]: E1006 15:04:12.288923    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:04:20 functional-933184 kubelet[9144]: E1006 15:04:20.285378    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:04:23 functional-933184 kubelet[9144]: E1006 15:04:23.282902    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:04:24 functional-933184 kubelet[9144]: E1006 15:04:24.285676    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:04:35 functional-933184 kubelet[9144]: E1006 15:04:35.284799    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	Oct 06 15:04:38 functional-933184 kubelet[9144]: E1006 15:04:38.283157    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 15:04:39 functional-933184 kubelet[9144]: E1006 15:04:39.284773    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2zsnh" podUID="97268b55-4c1b-484d-88f1-292d3097925c"
	Oct 06 15:04:46 functional-933184 kubelet[9144]: E1006 15:04:46.286546    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pgh6s" podUID="2ba980ac-8722-4a30-9589-6a1aa308a633"
	
	
	==> storage-provisioner [2e9d6beb6e28] <==
	I1006 14:49:02.934187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 14:49:02.938570       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [67fbd4a16067] <==
	W1006 15:04:21.276103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:23.279553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:23.286755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:25.289493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:25.294127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:27.297489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:27.304531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:29.307252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:29.311976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:31.315598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:31.322962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:33.326779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:33.331463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:35.335150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:35.340046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:37.343336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:37.348166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:39.353641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:39.361552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:41.364913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:41.369642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:43.372374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:43.377729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:45.388931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 15:04:45.399026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-933184 -n functional-933184
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-pgh6s kubernetes-dashboard-855c9754f9-2zsnh
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933184 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-pgh6s kubernetes-dashboard-855c9754f9-2zsnh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-933184 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-pgh6s kubernetes-dashboard-855c9754f9-2zsnh: exit status 1 (108.289368ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933184/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:59:43 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://7e4dd967101000dd167ffc5704913a9a50f08eb001a4ef87d27671b3af0993db
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 06 Oct 2025 14:59:46 +0000
	      Finished:     Mon, 06 Oct 2025 14:59:46 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zkp65 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zkp65:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m4s  default-scheduler  Successfully assigned default/busybox-mount to functional-933184
	  Normal  Pulling    5m4s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m1s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.413s (2.413s including waiting). Image size: 3547125 bytes.
	  Normal  Created    5m1s  kubelet            Created container: mount-munger
	  Normal  Started    5m1s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933184/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:49:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4dbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p4dbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/sp-pod to functional-933184
	  Warning  Failed     15m                kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x4 over 15m)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x66 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x66 over 15m)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-pgh6s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2zsnh" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-933184 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-pgh6s kubernetes-dashboard-855c9754f9-2zsnh: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.33s)
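
Every event trail in this failure points at the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests") blocked the docker.io/nginx, kubernetesui/dashboard, and kubernetesui/metrics-scraper pulls, so the pods never left ImagePullBackOff. One possible mitigation for reruns (a sketch, not part of this run; it assumes the host has Docker Hub credentials or unused anonymous quota, and uses only stock minikube/docker commands):

	# pull on the host, then side-load the image so kubelet never touches the registry
	docker pull docker.io/nginx
	minikube -p functional-933184 image load docker.io/nginx
	# or log the in-cluster docker daemon into Docker Hub to lift the anonymous limit
	# (<hub-user> is a placeholder for a real Hub account)
	minikube -p functional-933184 ssh -- docker login -u <hub-user>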

x
+
TestFunctional/parallel/ServiceCmdConnect (603.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-933184 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-933184 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8vhg5" [afbbfa0f-3a47-4314-8241-153b7c527e2f] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-8vhg5" [afbbfa0f-3a47-4314-8241-153b7c527e2f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1006 14:49:49.875238  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:51:11.797121  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:53:27.932602  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-933184 -n functional-933184
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-06 14:59:36.99756022 +0000 UTC m=+2355.795428991
functional_test.go:1645: (dbg) Run:  kubectl --context functional-933184 describe po hello-node-connect-7d85dfc575-8vhg5 -n default
functional_test.go:1645: (dbg) kubectl --context functional-933184 describe po hello-node-connect-7d85dfc575-8vhg5 -n default:
Name:             hello-node-connect-7d85dfc575-8vhg5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933184/192.168.49.2
Start Time:       Mon, 06 Oct 2025 14:49:36 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8hlxw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8hlxw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8vhg5 to functional-933184
  Warning  Failed     8m29s                 kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m57s (x4 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-933184 logs hello-node-connect-7d85dfc575-8vhg5 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-933184 logs hello-node-connect-7d85dfc575-8vhg5 -n default: exit status 1 (107.075202ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-8vhg5" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-933184 logs hello-node-connect-7d85dfc575-8vhg5 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-933184 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-8vhg5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933184/192.168.49.2
Start Time:       Mon, 06 Oct 2025 14:49:36 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8hlxw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8hlxw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8vhg5 to functional-933184
  Warning  Failed     8m29s                 kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m57s (x4 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-933184 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-933184 logs -l app=hello-node-connect: exit status 1 (84.405867ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-8vhg5" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-933184 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-933184 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.97.222
IPs:                      10.98.97.222
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30964/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
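The empty Endpoints field is the service-level symptom of the same failure: a pod that never becomes Ready is excluded from its Service's endpoints, so NodePort 30964 has no backend to route to. A hypothetical spot-check (not run in this log):

	# an empty ENDPOINTS column means connections to the NodePort cannot be forwarded anywhere
	kubectl --context functional-933184 get endpoints hello-node-connect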
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-933184
helpers_test.go:243: (dbg) docker inspect functional-933184:

-- stdout --
	[
	    {
	        "Id": "5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501",
	        "Created": "2025-10-06T14:46:07.46263544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 846873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:46:07.527334264Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/hosts",
	        "LogPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501-json.log",
	        "Name": "/functional-933184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-933184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-933184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501",
	                "LowerDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-933184",
	                "Source": "/var/lib/docker/volumes/functional-933184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-933184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-933184",
	                "name.minikube.sigs.k8s.io": "functional-933184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4c74fefb54016d9db8a4692ad25d486a608673942af5fac2a3ceb965acb0bf5",
	            "SandboxKey": "/var/run/docker/netns/a4c74fefb540",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37520"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37518"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37519"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-933184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:6c:0f:8d:60:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4301fd96426a7648713373a63590dd66ad770b3f3e9d1c28d9ad21b65bbabb96",
	                    "EndpointID": "cd029ae863a08454ef7f268f8582fe95c68cbe2ca1d8e537bad3d56c5c93c68b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-933184",
	                        "5fffa5167caa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
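The "Ports" map in the inspect output above is what the provisioner dials later in this log (127.0.0.1:37516 for 22/tcp); the same value can be extracted directly with the Go template that minikube itself runs further down:

	# print only the host port mapped to the node's SSH port
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184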
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-933184 -n functional-933184
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 logs -n 25: (1.228152499s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-933184 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ kubectl │ functional-933184 kubectl -- --context functional-933184 get pods                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ start   │ -p functional-933184 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                 │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:49 UTC │
	│ service │ invalid-svc -p functional-933184                                                                                         │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ cp      │ functional-933184 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                       │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config unset cpus                                                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config get cpus                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ config  │ functional-933184 config set cpus 2                                                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config get cpus                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh -n functional-933184 sudo cat /home/docker/cp-test.txt                                             │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config unset cpus                                                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config get cpus                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ ssh     │ functional-933184 ssh echo hello                                                                                         │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ cp      │ functional-933184 cp functional-933184:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd11068724/001/cp-test.txt │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh cat /etc/hostname                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh -n functional-933184 sudo cat /home/docker/cp-test.txt                                             │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ tunnel  │ functional-933184 tunnel --alsologtostderr                                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ tunnel  │ functional-933184 tunnel --alsologtostderr                                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ cp      │ functional-933184 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh -n functional-933184 sudo cat /tmp/does/not/exist/cp-test.txt                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ tunnel  │ functional-933184 tunnel --alsologtostderr                                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ addons  │ functional-933184 addons list                                                                                            │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ addons  │ functional-933184 addons list -o json                                                                                    │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:48:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:48:18.880198  854070 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:48:18.889828  854070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:18.889883  854070 out.go:374] Setting ErrFile to fd 2...
	I1006 14:48:18.889892  854070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:18.890285  854070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:48:18.890787  854070 out.go:368] Setting JSON to false
	I1006 14:48:18.892193  854070 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":77451,"bootTime":1759684648,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:48:18.892298  854070 start.go:140] virtualization:  
	I1006 14:48:18.895789  854070 out.go:179] * [functional-933184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:48:18.899573  854070 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:48:18.899646  854070 notify.go:220] Checking for updates...
	I1006 14:48:18.905375  854070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:48:18.908334  854070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:48:18.911406  854070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:48:18.914411  854070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:48:18.917407  854070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:48:18.920885  854070 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:48:18.921001  854070 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:48:18.947463  854070 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:48:18.947580  854070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:48:19.017673  854070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-06 14:48:19.006342719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:48:19.017765  854070 docker.go:318] overlay module found
	I1006 14:48:19.020757  854070 out.go:179] * Using the docker driver based on existing profile
	I1006 14:48:19.023622  854070 start.go:304] selected driver: docker
	I1006 14:48:19.023631  854070 start.go:924] validating driver "docker" against &{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:19.023806  854070 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:48:19.023920  854070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:48:19.087029  854070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-06 14:48:19.076298504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:48:19.087544  854070 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:48:19.087572  854070 cni.go:84] Creating CNI manager for ""
	I1006 14:48:19.087638  854070 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:48:19.087771  854070 start.go:348] cluster config:
	{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:19.090997  854070 out.go:179] * Starting "functional-933184" primary control-plane node in "functional-933184" cluster
	I1006 14:48:19.093837  854070 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:48:19.096874  854070 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:48:19.099815  854070 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:48:19.099862  854070 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1006 14:48:19.099869  854070 cache.go:58] Caching tarball of preloaded images
	I1006 14:48:19.099960  854070 preload.go:233] Found /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1006 14:48:19.099969  854070 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1006 14:48:19.100080  854070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/config.json ...
	I1006 14:48:19.100309  854070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:48:19.125246  854070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:48:19.125256  854070 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:48:19.125283  854070 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:48:19.125311  854070 start.go:360] acquireMachinesLock for functional-933184: {Name:mkca21d6f937ff7127d821d61e44cf2e04756079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:48:19.125377  854070 start.go:364] duration metric: took 49.82µs to acquireMachinesLock for "functional-933184"
	I1006 14:48:19.125394  854070 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:48:19.125403  854070 fix.go:54] fixHost starting: 
	I1006 14:48:19.125674  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:48:19.144035  854070 fix.go:112] recreateIfNeeded on functional-933184: state=Running err=<nil>
	W1006 14:48:19.144065  854070 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:48:19.147347  854070 out.go:252] * Updating the running docker "functional-933184" container ...
	I1006 14:48:19.147374  854070 machine.go:93] provisionDockerMachine start ...
	I1006 14:48:19.147456  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.164990  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.165309  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.165316  854070 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:48:19.303353  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-933184
	
	I1006 14:48:19.303367  854070 ubuntu.go:182] provisioning hostname "functional-933184"
	I1006 14:48:19.303439  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.324077  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.324406  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.324415  854070 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-933184 && echo "functional-933184" | sudo tee /etc/hostname
	I1006 14:48:19.469521  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-933184
	
	I1006 14:48:19.469596  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.496302  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.496607  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.496628  854070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-933184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-933184/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-933184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:48:19.632163  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
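The embedded script above is an idempotent /etc/hosts fix-up: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place, a missing one is appended, and a host that already maps functional-933184 is left untouched (the empty output here is that no-op success case). The whole-line match is what makes it safe to re-run:

	# -x: match the entire line; -q: exit status only (0 if the mapping already exists)
	grep -xq '.*\sfunctional-933184' /etc/hosts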
	I1006 14:48:19.632178  854070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-803497/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-803497/.minikube}
	I1006 14:48:19.632194  854070 ubuntu.go:190] setting up certificates
	I1006 14:48:19.632203  854070 provision.go:84] configureAuth start
	I1006 14:48:19.632263  854070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-933184
	I1006 14:48:19.649796  854070 provision.go:143] copyHostCerts
	I1006 14:48:19.649853  854070 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem, removing ...
	I1006 14:48:19.649868  854070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem
	I1006 14:48:19.649944  854070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem (1082 bytes)
	I1006 14:48:19.650054  854070 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem, removing ...
	I1006 14:48:19.650058  854070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem
	I1006 14:48:19.650083  854070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem (1123 bytes)
	I1006 14:48:19.650144  854070 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem, removing ...
	I1006 14:48:19.650147  854070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem
	I1006 14:48:19.650168  854070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem (1675 bytes)
	I1006 14:48:19.650216  854070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem org=jenkins.functional-933184 san=[127.0.0.1 192.168.49.2 functional-933184 localhost minikube]
	I1006 14:48:19.777745  854070 provision.go:177] copyRemoteCerts
	I1006 14:48:19.777796  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:48:19.777845  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.796089  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:19.898424  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:48:19.919997  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:48:19.938457  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:48:19.956564  854070 provision.go:87] duration metric: took 324.347297ms to configureAuth
	I1006 14:48:19.956580  854070 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:48:19.956778  854070 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:48:19.956827  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.974250  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.974609  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.974617  854070 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 14:48:20.125001  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1006 14:48:20.125013  854070 ubuntu.go:71] root file system type: overlay
	I1006 14:48:20.125168  854070 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 14:48:20.125241  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.144858  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:20.145159  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:20.145234  854070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 14:48:20.289727  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1006 14:48:20.289816  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.309957  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:20.310249  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:20.310264  854070 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 14:48:20.456427  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
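The command above leans on diff(1)'s exit status: diff exits non-zero only when the generated docker.service differs from the installed one, so the move/daemon-reload/enable/restart branch runs only on change; the empty output here suggests the installed unit already matched. A stripped-down sketch of the same change-detection idiom (generic file names, not from this run):

	# replace the config and reload only if the newly generated file differs
	diff -u current.conf generated.conf || { sudo mv generated.conf current.conf && sudo systemctl daemon-reload; }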
	I1006 14:48:20.456440  854070 machine.go:96] duration metric: took 1.309058805s to provisionDockerMachine
	I1006 14:48:20.456449  854070 start.go:293] postStartSetup for "functional-933184" (driver="docker")
	I1006 14:48:20.456458  854070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:48:20.456541  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:48:20.456580  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.474077  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.571781  854070 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:48:20.575141  854070 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:48:20.575160  854070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:48:20.575170  854070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/addons for local assets ...
	I1006 14:48:20.575232  854070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/files for local assets ...
	I1006 14:48:20.575312  854070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem -> 8053512.pem in /etc/ssl/certs
	I1006 14:48:20.575389  854070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/test/nested/copy/805351/hosts -> hosts in /etc/test/nested/copy/805351
	I1006 14:48:20.575439  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/805351
	I1006 14:48:20.583255  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem --> /etc/ssl/certs/8053512.pem (1708 bytes)
	I1006 14:48:20.601564  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/test/nested/copy/805351/hosts --> /etc/test/nested/copy/805351/hosts (40 bytes)
	I1006 14:48:20.619648  854070 start.go:296] duration metric: took 163.185604ms for postStartSetup
	I1006 14:48:20.619764  854070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:48:20.619802  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.636974  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.729447  854070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:48:20.734813  854070 fix.go:56] duration metric: took 1.609406865s for fixHost
	I1006 14:48:20.734828  854070 start.go:83] releasing machines lock for "functional-933184", held for 1.609444328s
	I1006 14:48:20.734896  854070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-933184
	I1006 14:48:20.751967  854070 ssh_runner.go:195] Run: cat /version.json
	I1006 14:48:20.752025  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.752308  854070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:48:20.752360  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.781812  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.792778  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.888346  854070 ssh_runner.go:195] Run: systemctl --version
	I1006 14:48:21.032661  854070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:48:21.037194  854070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:48:21.037256  854070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:48:21.045539  854070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:48:21.045556  854070 start.go:495] detecting cgroup driver to use...
	I1006 14:48:21.045589  854070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:48:21.045692  854070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:21.060738  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1006 14:48:21.070442  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 14:48:21.079988  854070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 14:48:21.080062  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 14:48:21.089988  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:48:21.099219  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 14:48:21.108899  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:48:21.118826  854070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:48:21.127804  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 14:48:21.137959  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1006 14:48:21.147314  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1006 14:48:21.161424  854070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:48:21.169648  854070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:48:21.177481  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:21.321063  854070 ssh_runner.go:195] Run: sudo systemctl restart containerd
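Note: the sed pipeline above toggles containerd's cgroup integration to match the "cgroupfs" driver detected on the host, then restarts containerd so the change takes effect. As a rough local illustration (not minikube's actual code; the file path and in-place rewrite are assumptions), the SystemdCgroup edit could be done in pure Go like this:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        // Assumed path; minikube edits this file over SSH inside the node container.
        const path = "/etc/containerd/config.toml"

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }

        // Mirror of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))

        if err := os.WriteFile(path, out, 0644); err != nil {
            log.Fatal(err)
        }
    }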
	I1006 14:48:21.566626  854070 start.go:495] detecting cgroup driver to use...
	I1006 14:48:21.566677  854070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:48:21.566734  854070 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 14:48:21.591258  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:21.605453  854070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:48:21.643429  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:21.658840  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 14:48:21.674415  854070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:21.691411  854070 ssh_runner.go:195] Run: which cri-dockerd
	I1006 14:48:21.695512  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 14:48:21.703524  854070 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1006 14:48:21.717517  854070 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 14:48:21.865866  854070 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 14:48:22.020236  854070 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 14:48:22.020319  854070 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1006 14:48:22.037261  854070 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1006 14:48:22.050632  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:22.216231  854070 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 14:48:48.121717  854070 ssh_runner.go:235] Completed: sudo systemctl restart docker: (25.905463969s)
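Note: the 130-byte /etc/docker/daemon.json pushed just before the restart is what actually switches Docker to the "cgroupfs" driver; its exact contents are not logged. A plausible equivalent payload (an assumption, not the verbatim file) can be generated like so:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    func main() {
        // Illustrative daemon.json fields; the real 130-byte payload is not shown in the log.
        cfg := map[string]interface{}{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out)) // would be written to /etc/docker/daemon.json
    }

Docker only picks this up on restart, which is why the log shows `systemctl restart docker` immediately after (here taking ~26s).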
	I1006 14:48:48.121779  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:48:48.138376  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1006 14:48:48.160747  854070 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1006 14:48:48.191436  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:48:48.204419  854070 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 14:48:48.331884  854070 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 14:48:48.447368  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:48.574130  854070 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 14:48:48.589966  854070 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1006 14:48:48.603625  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:48.731994  854070 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1006 14:48:48.831628  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:48:48.845580  854070 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 14:48:48.845636  854070 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 14:48:48.849656  854070 start.go:563] Will wait 60s for crictl version
	I1006 14:48:48.849712  854070 ssh_runner.go:195] Run: which crictl
	I1006 14:48:48.854548  854070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:48:48.879534  854070 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
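Note: start.go waits up to 60s for /var/run/cri-dockerd.sock to appear before probing it with crictl. A minimal polling loop in the same spirit (path and timeout taken from the log; the helper name is made up):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("socket is ready")
    }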
	I1006 14:48:48.879591  854070 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:48:48.903115  854070 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:48:48.929131  854070 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1006 14:48:48.929231  854070 cli_runner.go:164] Run: docker network inspect functional-933184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:48:48.946082  854070 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:48:48.953512  854070 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:48:48.956232  854070 kubeadm.go:883] updating cluster {Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:48:48.956349  854070 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:48:48.956419  854070 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:48:48.975480  854070 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-933184
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1006 14:48:48.975493  854070 docker.go:621] Images already preloaded, skipping extraction
	I1006 14:48:48.975556  854070 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:48:48.995939  854070 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-933184
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1006 14:48:48.995953  854070 cache_images.go:85] Images are preloaded, skipping loading
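Note: cache_images.go decides it can skip loading by listing what is already in the node's Docker daemon and checking the required tags against it. A hedged sketch of that comparison (the required list is copied from the log output above; `docker` must be on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.34.1",
            "registry.k8s.io/etcd:3.6.4-0",
            "registry.k8s.io/coredns/coredns:v1.12.1",
        }

        // Same listing command the log runs inside the node.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }

        for _, img := range required {
            if !have[img] {
                fmt.Println("missing, would extract preload:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }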
	I1006 14:48:48.995961  854070 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 docker true true} ...
	I1006 14:48:48.996064  854070 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-933184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:48:48.996130  854070 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 14:48:49.054160  854070 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:48:49.054181  854070 cni.go:84] Creating CNI manager for ""
	I1006 14:48:49.054201  854070 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:48:49.054211  854070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:48:49.054239  854070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-933184 NodeName:functional-933184 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:48:49.054358  854070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-933184"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1006 14:48:49.054420  854070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:48:49.062530  854070 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:48:49.062589  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:48:49.070333  854070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1006 14:48:49.083561  854070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:48:49.096903  854070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I1006 14:48:49.110517  854070 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:48:49.114658  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:49.246029  854070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:48:49.267129  854070 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184 for IP: 192.168.49.2
	I1006 14:48:49.267139  854070 certs.go:195] generating shared ca certs ...
	I1006 14:48:49.267154  854070 certs.go:227] acquiring lock for ca certs: {Name:mk78547ccc35462965e66385811a001935f7f131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:48:49.267300  854070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key
	I1006 14:48:49.267340  854070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key
	I1006 14:48:49.267346  854070 certs.go:257] generating profile certs ...
	I1006 14:48:49.267432  854070 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.key
	I1006 14:48:49.267478  854070 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/apiserver.key.4a9bd7a8
	I1006 14:48:49.267511  854070 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/proxy-client.key
	I1006 14:48:49.267634  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/805351.pem (1338 bytes)
	W1006 14:48:49.267674  854070 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-803497/.minikube/certs/805351_empty.pem, impossibly tiny 0 bytes
	I1006 14:48:49.267682  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:48:49.267711  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:48:49.267734  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:48:49.267753  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem (1675 bytes)
	I1006 14:48:49.267805  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem (1708 bytes)
	I1006 14:48:49.268391  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:48:49.297040  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:48:49.325986  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:48:49.352372  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 14:48:49.380471  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:48:49.400124  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:48:49.430696  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:48:49.474753  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:48:49.514435  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem --> /usr/share/ca-certificates/8053512.pem (1708 bytes)
	I1006 14:48:49.551654  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:48:49.618406  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/805351.pem --> /usr/share/ca-certificates/805351.pem (1338 bytes)
	I1006 14:48:49.654754  854070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:48:49.671554  854070 ssh_runner.go:195] Run: openssl version
	I1006 14:48:49.679304  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:48:49.698649  854070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:48:49.707998  854070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:48:49.708053  854070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:48:49.769857  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:48:49.782522  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/805351.pem && ln -fs /usr/share/ca-certificates/805351.pem /etc/ssl/certs/805351.pem"
	I1006 14:48:49.794120  854070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/805351.pem
	I1006 14:48:49.800677  854070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:46 /usr/share/ca-certificates/805351.pem
	I1006 14:48:49.800748  854070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/805351.pem
	I1006 14:48:49.862677  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/805351.pem /etc/ssl/certs/51391683.0"
	I1006 14:48:49.875313  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8053512.pem && ln -fs /usr/share/ca-certificates/8053512.pem /etc/ssl/certs/8053512.pem"
	I1006 14:48:49.886726  854070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8053512.pem
	I1006 14:48:49.893952  854070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:46 /usr/share/ca-certificates/8053512.pem
	I1006 14:48:49.894019  854070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8053512.pem
	I1006 14:48:49.976687  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8053512.pem /etc/ssl/certs/3ec20f2e.0"
	I1006 14:48:49.994312  854070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:48:50.005609  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:48:50.092712  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:48:50.195174  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:48:50.300346  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:48:50.378562  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:48:50.446051  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
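Note: each `openssl x509 -checkend 86400` above asks whether the certificate expires within 86400 seconds (24 hours), and the earlier `-hash` calls compute the subject hash that names the /etc/ssl/certs symlinks (e.g. b5213941.0 for the minikube CA). The expiry check is easy to reproduce with Go's crypto/x509 (the certificate path is passed in; this is a sketch, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        if len(os.Args) < 2 {
            log.Fatal("usage: checkend <cert.pem>")
        }
        // Any PEM certificate; the log checks e.g. /var/lib/minikube/certs/etcd/peer.crt.
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of: openssl x509 -checkend 86400
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }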
	I1006 14:48:50.590291  854070 kubeadm.go:400] StartCluster: {Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:50.590445  854070 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:48:50.690795  854070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:48:50.706671  854070 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:48:50.706694  854070 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:48:50.706742  854070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:48:50.718678  854070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:50.719269  854070 kubeconfig.go:125] found "functional-933184" server: "https://192.168.49.2:8441"
	I1006 14:48:50.720993  854070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:48:50.733066  854070 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:46:18.452668782 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:48:49.105743754 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
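Note: drift detection here is just `diff -u` on the old and new kubeadm.yaml: exit status 0 means identical, 1 means the files differ (here only the enable-admission-plugins value), and minikube then reconfigures rather than restarting from scratch. A small sketch of that exit-code check (paths taken from the log; not the actual kubeadm.go logic):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.Output()
        if cmd.ProcessState == nil {
            fmt.Println("diff could not run:", err)
            return
        }

        switch cmd.ProcessState.ExitCode() {
        case 0:
            fmt.Println("no drift, keep running cluster as-is")
        case 1:
            fmt.Println("config drift detected, will reconfigure:")
            fmt.Println(string(out)) // stdout still holds the unified diff on exit 1
        default:
            fmt.Println("diff failed:", err)
        }
    }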
	I1006 14:48:50.733085  854070 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:48:50.733157  854070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:48:50.781735  854070 docker.go:484] Stopping containers: [f0543848bada dbf089d9c53c 550fc01c3445 5330a4986510 47f1155aea2d 2da3d4d0b60c d73913b2b29f da4175c7d7ad c0787bb0e8f3 ec8bd41a3bb5 cda1ee00f9c1 0cc858d04b2e 427dc6f96278 ac7201a1c2c4 111e36f4b9fd eeb21d45d960 c3d511b79b6c 2f62b1a3dcbf 402bdb9bee67 36f0b465533f 46d7a9bf558b 7e3787177fc4 bf3ae8b955f9 d6dca607e1d2 e72d62df25e7 e36c72ae63db 84869ced6e0e 101d181170e5 f28b9b3c458a 30fc03a7a185 980a2bb4b1ff]
	I1006 14:48:50.781824  854070 ssh_runner.go:195] Run: docker stop f0543848bada dbf089d9c53c 550fc01c3445 5330a4986510 47f1155aea2d 2da3d4d0b60c d73913b2b29f da4175c7d7ad c0787bb0e8f3 ec8bd41a3bb5 cda1ee00f9c1 0cc858d04b2e 427dc6f96278 ac7201a1c2c4 111e36f4b9fd eeb21d45d960 c3d511b79b6c 2f62b1a3dcbf 402bdb9bee67 36f0b465533f 46d7a9bf558b 7e3787177fc4 bf3ae8b955f9 d6dca607e1d2 e72d62df25e7 e36c72ae63db 84869ced6e0e 101d181170e5 f28b9b3c458a 30fc03a7a185 980a2bb4b1ff
	I1006 14:48:52.674535  854070 ssh_runner.go:235] Completed: docker stop f0543848bada dbf089d9c53c 550fc01c3445 5330a4986510 47f1155aea2d 2da3d4d0b60c d73913b2b29f da4175c7d7ad c0787bb0e8f3 ec8bd41a3bb5 cda1ee00f9c1 0cc858d04b2e 427dc6f96278 ac7201a1c2c4 111e36f4b9fd eeb21d45d960 c3d511b79b6c 2f62b1a3dcbf 402bdb9bee67 36f0b465533f 46d7a9bf558b 7e3787177fc4 bf3ae8b955f9 d6dca607e1d2 e72d62df25e7 e36c72ae63db 84869ced6e0e 101d181170e5 f28b9b3c458a 30fc03a7a185 980a2bb4b1ff: (1.892654734s)
	I1006 14:48:52.674604  854070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:48:52.798255  854070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.823854  854070 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  6 14:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  6 14:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  6 14:46 /etc/kubernetes/scheduler.conf
	
	I1006 14:48:52.823913  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:48:52.844906  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.871472  854070 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:52.871553  854070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.892475  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.904615  854070 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:52.904682  854070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.918458  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.937148  854070 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:52.937219  854070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1006 14:48:52.956287  854070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:48:52.968354  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:53.024185  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:55.876937  854070 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.852727077s)
	I1006 14:48:55.876995  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:56.110402  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:56.177562  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:56.252607  854070 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:48:56.252678  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:56.753096  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:57.252778  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:57.752801  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:57.774902  854070 api_server.go:72] duration metric: took 1.522302209s to wait for apiserver process to appear ...
	I1006 14:48:57.774916  854070 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:48:57.774938  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.047293  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 14:49:02.047318  854070 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 14:49:02.047330  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.159621  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1006 14:49:02.159638  854070 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 14:49:02.275966  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.318215  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 14:49:02.318236  854070 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:49:02.775888  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.786691  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 14:49:02.786708  854070 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:49:03.275979  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:03.289549  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1006 14:49:03.289564  854070 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:49:03.775159  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:03.783520  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1006 14:49:03.797411  854070 api_server.go:141] control plane version: v1.34.1
	I1006 14:49:03.797427  854070 api_server.go:131] duration metric: took 6.022506122s to wait for apiserver health ...
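Note: the /healthz probe tolerates the transient responses seen above, 403 while the apiserver still treats the caller as system:anonymous and 500 while poststarthooks (rbac/bootstrap-roles etc.) are still failing, and simply keeps polling until it gets a plain 200 "ok". A minimal client in the same spirit, with certificate verification skipped as a stated assumption since the probe targets a raw node IP:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver cert is validated elsewhere; skip verification for the probe.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8441/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Println("not healthy yet:", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for apiserver health")
    }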
	I1006 14:49:03.797435  854070 cni.go:84] Creating CNI manager for ""
	I1006 14:49:03.797445  854070 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:49:03.801089  854070 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:49:03.804103  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:49:03.812537  854070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
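Note: the 496-byte /etc/cni/net.d/1-k8s.conflist written here is what wires pods into the bridge CNI chosen above. Its exact contents are not in the log; the snippet below embeds a typical bridge+portmap conflist (illustrative values only, the real payload may differ) and checks that it parses:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // Illustrative conflist; the real 496-byte file is not shown in the log.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        var parsed map[string]interface{}
        if err := json.Unmarshal([]byte(conflist), &parsed); err != nil {
            log.Fatal(err)
        }
        fmt.Println("valid conflist for network:", parsed["name"])
    }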
	I1006 14:49:03.826445  854070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:49:03.831717  854070 system_pods.go:59] 7 kube-system pods found
	I1006 14:49:03.831744  854070 system_pods.go:61] "coredns-66bc5c9577-9mq5b" [3f6636b3-0de0-4de3-93dd-f948f6c444a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:49:03.831753  854070 system_pods.go:61] "etcd-functional-933184" [683200d7-2d0c-43e1-91bb-476782967ca9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:49:03.831765  854070 system_pods.go:61] "kube-apiserver-functional-933184" [1b90d7e7-cce6-416a-9bc7-ce9370628c70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:49:03.831771  854070 system_pods.go:61] "kube-controller-manager-functional-933184" [8e62c2c1-bc9a-42b3-be3a-5bbdede385be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:49:03.831777  854070 system_pods.go:61] "kube-proxy-zdgg7" [83956645-5857-4429-94f0-9c15888aef56] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1006 14:49:03.831787  854070 system_pods.go:61] "kube-scheduler-functional-933184" [116b89dc-6a80-489b-b959-c92899fb5d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:49:03.831792  854070 system_pods.go:61] "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:49:03.831800  854070 system_pods.go:74] duration metric: took 5.344758ms to wait for pod list to return data ...
	I1006 14:49:03.831807  854070 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:49:03.838718  854070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:49:03.838738  854070 node_conditions.go:123] node cpu capacity is 2
	I1006 14:49:03.838749  854070 node_conditions.go:105] duration metric: took 6.938195ms to run NodePressure ...
	I1006 14:49:03.838808  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:49:04.101294  854070 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1006 14:49:04.106152  854070 kubeadm.go:743] kubelet initialised
	I1006 14:49:04.106162  854070 kubeadm.go:744] duration metric: took 4.856201ms waiting for restarted kubelet to initialise ...
	I1006 14:49:04.106176  854070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:49:04.127107  854070 ops.go:34] apiserver oom_adj: -16
	I1006 14:49:04.127119  854070 kubeadm.go:601] duration metric: took 13.420418925s to restartPrimaryControlPlane
	I1006 14:49:04.127127  854070 kubeadm.go:402] duration metric: took 13.536856366s to StartCluster
	I1006 14:49:04.127142  854070 settings.go:142] acquiring lock: {Name:mk86d6d1803b10e0f74b7ca9be175f37419eb162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:49:04.127214  854070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:49:04.128069  854070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/kubeconfig: {Name:mkd0e7dce0fefee9d8326b7f5e1280f715df58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:49:04.128340  854070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:49:04.128537  854070 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:49:04.128571  854070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:49:04.128627  854070 addons.go:69] Setting storage-provisioner=true in profile "functional-933184"
	I1006 14:49:04.128639  854070 addons.go:238] Setting addon storage-provisioner=true in "functional-933184"
	W1006 14:49:04.128644  854070 addons.go:247] addon storage-provisioner should already be in state true
	I1006 14:49:04.128665  854070 host.go:66] Checking if "functional-933184" exists ...
	I1006 14:49:04.128773  854070 addons.go:69] Setting default-storageclass=true in profile "functional-933184"
	I1006 14:49:04.128791  854070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-933184"
	I1006 14:49:04.129107  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:49:04.129111  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:49:04.135313  854070 out.go:179] * Verifying Kubernetes components...
	I1006 14:49:04.138343  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:49:04.164970  854070 addons.go:238] Setting addon default-storageclass=true in "functional-933184"
	W1006 14:49:04.164981  854070 addons.go:247] addon default-storageclass should already be in state true
	I1006 14:49:04.165006  854070 host.go:66] Checking if "functional-933184" exists ...
	I1006 14:49:04.165436  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:49:04.169406  854070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:49:04.172372  854070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:49:04.172384  854070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:49:04.172453  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:49:04.203925  854070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:49:04.203938  854070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:49:04.204143  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:49:04.207020  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:49:04.249369  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:49:04.466542  854070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:49:04.476665  854070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:49:04.528320  854070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:49:05.243502  854070 node_ready.go:35] waiting up to 6m0s for node "functional-933184" to be "Ready" ...
	I1006 14:49:05.246400  854070 node_ready.go:49] node "functional-933184" is "Ready"
	I1006 14:49:05.246416  854070 node_ready.go:38] duration metric: took 2.88453ms for node "functional-933184" to be "Ready" ...
	I1006 14:49:05.246431  854070 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:49:05.246496  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:49:05.257640  854070 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1006 14:49:05.260712  854070 addons.go:514] duration metric: took 1.132109358s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1006 14:49:05.262734  854070 api_server.go:72] duration metric: took 1.134369057s to wait for apiserver process to appear ...
	I1006 14:49:05.262757  854070 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:49:05.262775  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:05.272656  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1006 14:49:05.273684  854070 api_server.go:141] control plane version: v1.34.1
	I1006 14:49:05.273697  854070 api_server.go:131] duration metric: took 10.934975ms to wait for apiserver health ...
	I1006 14:49:05.273705  854070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:49:05.276857  854070 system_pods.go:59] 7 kube-system pods found
	I1006 14:49:05.276876  854070 system_pods.go:61] "coredns-66bc5c9577-9mq5b" [3f6636b3-0de0-4de3-93dd-f948f6c444a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:49:05.276883  854070 system_pods.go:61] "etcd-functional-933184" [683200d7-2d0c-43e1-91bb-476782967ca9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:49:05.276892  854070 system_pods.go:61] "kube-apiserver-functional-933184" [1b90d7e7-cce6-416a-9bc7-ce9370628c70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:49:05.276897  854070 system_pods.go:61] "kube-controller-manager-functional-933184" [8e62c2c1-bc9a-42b3-be3a-5bbdede385be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:49:05.276902  854070 system_pods.go:61] "kube-proxy-zdgg7" [83956645-5857-4429-94f0-9c15888aef56] Running
	I1006 14:49:05.276908  854070 system_pods.go:61] "kube-scheduler-functional-933184" [116b89dc-6a80-489b-b959-c92899fb5d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:49:05.276913  854070 system_pods.go:61] "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:49:05.276918  854070 system_pods.go:74] duration metric: took 3.208118ms to wait for pod list to return data ...
	I1006 14:49:05.276925  854070 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:49:05.279144  854070 default_sa.go:45] found service account: "default"
	I1006 14:49:05.279157  854070 default_sa.go:55] duration metric: took 2.227542ms for default service account to be created ...
	I1006 14:49:05.279165  854070 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:49:05.281979  854070 system_pods.go:86] 7 kube-system pods found
	I1006 14:49:05.281995  854070 system_pods.go:89] "coredns-66bc5c9577-9mq5b" [3f6636b3-0de0-4de3-93dd-f948f6c444a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:49:05.282003  854070 system_pods.go:89] "etcd-functional-933184" [683200d7-2d0c-43e1-91bb-476782967ca9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:49:05.282011  854070 system_pods.go:89] "kube-apiserver-functional-933184" [1b90d7e7-cce6-416a-9bc7-ce9370628c70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:49:05.282020  854070 system_pods.go:89] "kube-controller-manager-functional-933184" [8e62c2c1-bc9a-42b3-be3a-5bbdede385be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:49:05.282024  854070 system_pods.go:89] "kube-proxy-zdgg7" [83956645-5857-4429-94f0-9c15888aef56] Running
	I1006 14:49:05.282030  854070 system_pods.go:89] "kube-scheduler-functional-933184" [116b89dc-6a80-489b-b959-c92899fb5d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:49:05.282039  854070 system_pods.go:89] "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:49:05.282045  854070 system_pods.go:126] duration metric: took 2.875866ms to wait for k8s-apps to be running ...
	I1006 14:49:05.282052  854070 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:49:05.282109  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:49:05.296087  854070 system_svc.go:56] duration metric: took 14.02408ms WaitForService to wait for kubelet
	I1006 14:49:05.296104  854070 kubeadm.go:586] duration metric: took 1.16774243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:49:05.296121  854070 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:49:05.299847  854070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:49:05.299872  854070 node_conditions.go:123] node cpu capacity is 2
	I1006 14:49:05.299897  854070 node_conditions.go:105] duration metric: took 3.767328ms to run NodePressure ...
	I1006 14:49:05.299912  854070 start.go:241] waiting for startup goroutines ...
	I1006 14:49:05.299919  854070 start.go:246] waiting for cluster config update ...
	I1006 14:49:05.299930  854070 start.go:255] writing updated cluster config ...
	I1006 14:49:05.300290  854070 ssh_runner.go:195] Run: rm -f paused
	I1006 14:49:05.305653  854070 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:49:05.309970  854070 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9mq5b" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 14:49:07.315269  854070 pod_ready.go:104] pod "coredns-66bc5c9577-9mq5b" is not "Ready", error: <nil>
	I1006 14:49:07.815871  854070 pod_ready.go:94] pod "coredns-66bc5c9577-9mq5b" is "Ready"
	I1006 14:49:07.815887  854070 pod_ready.go:86] duration metric: took 2.505903338s for pod "coredns-66bc5c9577-9mq5b" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:07.818619  854070 pod_ready.go:83] waiting for pod "etcd-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:07.823273  854070 pod_ready.go:94] pod "etcd-functional-933184" is "Ready"
	I1006 14:49:07.823287  854070 pod_ready.go:86] duration metric: took 4.656599ms for pod "etcd-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:07.826051  854070 pod_ready.go:83] waiting for pod "kube-apiserver-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 14:49:09.831884  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	W1006 14:49:11.831972  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	W1006 14:49:14.331285  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	W1006 14:49:16.331367  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	I1006 14:49:16.832353  854070 pod_ready.go:94] pod "kube-apiserver-functional-933184" is "Ready"
	I1006 14:49:16.832368  854070 pod_ready.go:86] duration metric: took 9.00630456s for pod "kube-apiserver-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.834814  854070 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.839467  854070 pod_ready.go:94] pod "kube-controller-manager-functional-933184" is "Ready"
	I1006 14:49:16.839480  854070 pod_ready.go:86] duration metric: took 4.653349ms for pod "kube-controller-manager-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.841754  854070 pod_ready.go:83] waiting for pod "kube-proxy-zdgg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.846259  854070 pod_ready.go:94] pod "kube-proxy-zdgg7" is "Ready"
	I1006 14:49:16.846274  854070 pod_ready.go:86] duration metric: took 4.507933ms for pod "kube-proxy-zdgg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.848715  854070 pod_ready.go:83] waiting for pod "kube-scheduler-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:17.030370  854070 pod_ready.go:94] pod "kube-scheduler-functional-933184" is "Ready"
	I1006 14:49:17.030384  854070 pod_ready.go:86] duration metric: took 181.658137ms for pod "kube-scheduler-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:17.030396  854070 pod_ready.go:40] duration metric: took 11.724706092s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:49:17.083249  854070 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 14:49:17.086520  854070 out.go:179] * Done! kubectl is now configured to use "functional-933184" cluster and "default" namespace by default
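
	The start log above gates on the apiserver's /healthz endpoint (api_server.go:253) until it returns 200 "ok" before moving on to pod and service checks. Below is a minimal Go sketch of that style of wait loop, for illustration only: it is not minikube's actual implementation, the helper name waitForHealthz is invented for this sketch, and it skips TLS verification where minikube would verify against the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the deadline passes.
	// Sketch only: real code should verify the apiserver cert against the
	// cluster CA instead of InsecureSkipVerify.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // control plane reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above (https://192.168.49.2:8441/healthz).
		if err := waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}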
	
	
	==> Docker <==
	Oct 06 14:49:34 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2dfcec6aaeee6def3d791f41bc50d6ec2b79055527656c1e146e0104a7a2f16/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:49:34 functional-933184 dockerd[6746]: time="2025-10-06T14:49:34.423869371Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:49:34 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:34Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 06 14:49:36 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/37b0bf96d0811dd5803e487c68f0cb2928340eac7f232dbe66cfa7778489622a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:49:37 functional-933184 dockerd[6746]: time="2025-10-06T14:49:37.271565170Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:49:46 functional-933184 dockerd[6746]: time="2025-10-06T14:49:46.509312097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:49:51 functional-933184 dockerd[6746]: time="2025-10-06T14:49:51.494347395Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:50:14 functional-933184 dockerd[6746]: time="2025-10-06T14:50:14.503316691Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:50:16 functional-933184 dockerd[6746]: time="2025-10-06T14:50:16.507032785Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:50:56 functional-933184 dockerd[6746]: time="2025-10-06T14:50:56.504793642Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:51:08 functional-933184 dockerd[6746]: time="2025-10-06T14:51:08.585253608Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:51:08 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:51:08Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Oct 06 14:52:18 functional-933184 dockerd[6746]: time="2025-10-06T14:52:18.488644772Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:52:40 functional-933184 dockerd[6746]: time="2025-10-06T14:52:40.520546528Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:53:36 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:53:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f382d636608f6c2828651a541e2fb5c963f2edca19189e6994495c8f08a51a6b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:53:37 functional-933184 dockerd[6746]: time="2025-10-06T14:53:37.273001649Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:53:37 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:53:37Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Oct 06 14:53:53 functional-933184 dockerd[6746]: time="2025-10-06T14:53:53.503584523Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:54:20 functional-933184 dockerd[6746]: time="2025-10-06T14:54:20.516290006Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:55:06 functional-933184 dockerd[6746]: time="2025-10-06T14:55:06.596282520Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:55:06 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:55:06Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 06 14:55:09 functional-933184 dockerd[6746]: time="2025-10-06T14:55:09.486673197Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:55:30 functional-933184 dockerd[6746]: time="2025-10-06T14:55:30.525893001Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:56:42 functional-933184 dockerd[6746]: time="2025-10-06T14:56:42.504090409Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:59:33 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:59:33Z" level=info msg="Stop pulling image kicbase/echo-server:latest: Status: Downloaded newer image for kicbase/echo-server:latest"
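
	Every pull failure in the Docker log above is the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests). Docker Hub documents a way to read the current limit from response headers on a special test repository; the Go sketch below follows that procedure. Treat the ratelimitpreview/test repository and the RateLimit-* header names as assumptions to verify against current Docker docs. In CI, the usual mitigations are authenticated pulls (docker login) or pre-loading images into the cluster with minikube image load.

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	// Check the anonymous Docker Hub pull rate limit (endpoints per Docker's
	// published "check your rate limit" procedure; verify before relying on them).
	func main() {
		// 1. Fetch an anonymous pull token scoped to the rate-limit test repo.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// 2. HEAD the test manifest; the counters come back as response headers.
		req, _ := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer res.Body.Close()

		// Example value: "100;w=21600" means 100 pulls per 6-hour window.
		fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
	}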
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                         CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9222476e66295       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   5 seconds ago       Running             echo-server               0                   f382d636608f6       hello-node-75c85bcc94-v749q                 default
	6d9ba03b8863c       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                 10 minutes ago      Running             nginx                     0                   b86fa3d466cfc       nginx-svc                                   default
	67fbd4a16067e       ba04bb24b9575                                                                                 10 minutes ago      Running             storage-provisioner       4                   154edd9f915d2       storage-provisioner                         kube-system
	2e9d6beb6e280       ba04bb24b9575                                                                                 10 minutes ago      Exited              storage-provisioner       3                   154edd9f915d2       storage-provisioner                         kube-system
	156725d59efd7       05baa95f5142d                                                                                 10 minutes ago      Running             kube-proxy                3                   21b4db5f04536       kube-proxy-zdgg7                            kube-system
	bd006eabbe87b       138784d87c9c5                                                                                 10 minutes ago      Running             coredns                   2                   a3bbd10247337       coredns-66bc5c9577-9mq5b                    kube-system
	6fcdf6f551c14       43911e833d64d                                                                                 10 minutes ago      Running             kube-apiserver            0                   89e3edb922f15       kube-apiserver-functional-933184            kube-system
	8e509ed52ab67       b5f57ec6b9867                                                                                 10 minutes ago      Running             kube-scheduler            3                   feb368f89cc4e       kube-scheduler-functional-933184            kube-system
	bcb39dc782d61       7eb2c6ff0c5a7                                                                                 10 minutes ago      Running             kube-controller-manager   3                   c226e4161bd80       kube-controller-manager-functional-933184   kube-system
	ab99eb78d7130       a1894772a478e                                                                                 10 minutes ago      Running             etcd                      2                   ecf4d7659f06c       etcd-functional-933184                      kube-system
	f0543848bada1       b5f57ec6b9867                                                                                 10 minutes ago      Exited              kube-scheduler            2                   47f1155aea2db       kube-scheduler-functional-933184            kube-system
	dbf089d9c53ce       05baa95f5142d                                                                                 10 minutes ago      Exited              kube-proxy                2                   d73913b2b29fc       kube-proxy-zdgg7                            kube-system
	550fc01c34458       7eb2c6ff0c5a7                                                                                 10 minutes ago      Exited              kube-controller-manager   2                   ec8bd41a3bb5b       kube-controller-manager-functional-933184   kube-system
	427dc6f962780       138784d87c9c5                                                                                 11 minutes ago      Exited              coredns                   1                   ac7201a1c2c4f       coredns-66bc5c9577-9mq5b                    kube-system
	402bdb9bee67e       a1894772a478e                                                                                 11 minutes ago      Exited              etcd                      1                   bf3ae8b955f98       etcd-functional-933184                      kube-system
	
	
	==> coredns [427dc6f96278] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38471 - 40348 "HINFO IN 1928811608007205393.1693971363683984954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014901569s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd006eabbe87] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43853 - 59172 "HINFO IN 6573570928990532390.5180121269576993269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013433314s
	
	
	==> describe nodes <==
	Name:               functional-933184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-933184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=functional-933184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_46_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:59:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:58:03 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:58:03 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:58:03 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:58:03 +0000   Mon, 06 Oct 2025 14:46:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-933184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ab0fdbff02456391dde75296bb36e5
	  System UUID:                9a0c63bd-fa52-4df3-ab5b-d64d258d24eb
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-v749q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     hello-node-connect-7d85dfc575-8vhg5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-9mq5b                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-933184                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-933184             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-933184    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-zdgg7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-933184             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-933184 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	  Warning  ContainerGCFailed        11m (x2 over 12m)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	
	
	==> dmesg <==
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [402bdb9bee67] <==
	{"level":"warn","ts":"2025-10-06T14:47:57.087762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.109164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.128203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.162961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.174001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.196398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.300785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:48:37.808552Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-06T14:48:37.808609Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-933184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-06T14:48:37.808717Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T14:48:44.815049Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T14:48:44.815130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.815150Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-06T14:48:44.815235Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-06T14:48:44.815247Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817595Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817718Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T14:48:44.817751Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817963Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T14:48:44.818014Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T14:48:44.818047Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.822615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-06T14:48:44.822718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.822982Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-06T14:48:44.823005Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-933184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ab99eb78d713] <==
	{"level":"warn","ts":"2025-10-06T14:49:00.396546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.422633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.466690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.506841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.532156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.559195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.584785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.615586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.650951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.680894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.705590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.745295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.767876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.793778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.850829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.901664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.924749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.954731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.988428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.014783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.047809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.169500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45350","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:58:58.936061Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1136}
	{"level":"info","ts":"2025-10-06T14:58:58.959169Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1136,"took":"22.758864ms","hash":3482489076,"current-db-size-bytes":3203072,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1490944,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-06T14:58:58.959218Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3482489076,"revision":1136,"compact-revision":-1}
	
	
	==> kernel <==
	 14:59:38 up 21:42,  0 user,  load average: 0.05, 0.26, 0.82
	Linux functional-933184 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [6fcdf6f551c1] <==
	I1006 14:49:02.309582       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 14:49:02.325991       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 14:49:02.326044       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 14:49:02.326090       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 14:49:02.326339       1 aggregator.go:171] initial CRD sync complete...
	I1006 14:49:02.326353       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 14:49:02.326360       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 14:49:02.326366       1 cache.go:39] Caches are synced for autoregister controller
	I1006 14:49:02.345527       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 14:49:02.365629       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 14:49:02.405179       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 14:49:02.916474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1006 14:49:03.329580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1006 14:49:03.331065       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 14:49:03.337228       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 14:49:03.951315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 14:49:03.989764       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 14:49:04.028675       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 14:49:04.038814       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 14:49:05.810549       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 14:49:19.891278       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.228.43"}
	I1006 14:49:27.023450       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.166.75"}
	I1006 14:49:36.614963       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.97.222"}
	I1006 14:53:36.527959       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.124.112"}
	I1006 14:59:02.265017       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [550fc01c3445] <==
	
	
	==> kube-controller-manager [bcb39dc782d6] <==
	I1006 14:49:05.469490       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 14:49:05.470749       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 14:49:05.471955       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1006 14:49:05.476205       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 14:49:05.481848       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 14:49:05.482922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 14:49:05.484280       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 14:49:05.488298       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 14:49:05.490838       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 14:49:05.491078       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 14:49:05.496597       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 14:49:05.499335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 14:49:05.501471       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 14:49:05.501811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 14:49:05.502067       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 14:49:05.502221       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 14:49:05.502441       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 14:49:05.504112       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 14:49:05.504431       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 14:49:05.504694       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 14:49:05.504891       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1006 14:49:05.505339       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 14:49:05.511446       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 14:49:05.514540       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 14:49:05.518161       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [156725d59efd] <==
	I1006 14:49:03.079329       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:49:03.275374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:49:03.380514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:49:03.380553       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:49:03.380667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:49:03.400464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:49:03.400701       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:49:03.404946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:49:03.405244       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:49:03.405267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:49:03.406398       1 config.go:200] "Starting service config controller"
	I1006 14:49:03.406417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:49:03.415411       1 config.go:309] "Starting node config controller"
	I1006 14:49:03.415431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:49:03.415440       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:49:03.415867       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:49:03.415885       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:49:03.415900       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:49:03.415904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:49:03.506935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:49:03.516366       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:49:03.516382       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dbf089d9c53c] <==
	
	
	==> kube-scheduler [8e509ed52ab6] <==
	I1006 14:49:01.155166       1 serving.go:386] Generated self-signed cert in-memory
	I1006 14:49:02.830831       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 14:49:02.830869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:49:02.837907       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 14:49:02.837965       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 14:49:02.838019       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:49:02.838034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:49:02.838050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.838062       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.838500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:49:02.838651       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 14:49:02.938942       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.939304       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 14:49:02.939325       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f0543848bada] <==
	
	
	==> kubelet <==
	Oct 06 14:57:51 functional-933184 kubelet[9144]: E1006 14:57:51.283035    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:57:55 functional-933184 kubelet[9144]: E1006 14:57:55.282438    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:58:04 functional-933184 kubelet[9144]: E1006 14:58:04.283115    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:58:04 functional-933184 kubelet[9144]: E1006 14:58:04.283771    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:58:10 functional-933184 kubelet[9144]: E1006 14:58:10.282735    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:58:18 functional-933184 kubelet[9144]: E1006 14:58:18.282626    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:58:19 functional-933184 kubelet[9144]: E1006 14:58:19.282701    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:58:21 functional-933184 kubelet[9144]: E1006 14:58:21.283547    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:58:30 functional-933184 kubelet[9144]: E1006 14:58:30.282655    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:58:32 functional-933184 kubelet[9144]: E1006 14:58:32.283628    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:58:33 functional-933184 kubelet[9144]: E1006 14:58:33.282281    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:58:41 functional-933184 kubelet[9144]: E1006 14:58:41.283079    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:58:44 functional-933184 kubelet[9144]: E1006 14:58:44.283347    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:58:44 functional-933184 kubelet[9144]: E1006 14:58:44.284114    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:58:55 functional-933184 kubelet[9144]: E1006 14:58:55.282978    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:58:56 functional-933184 kubelet[9144]: E1006 14:58:56.283749    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:58:57 functional-933184 kubelet[9144]: E1006 14:58:57.283020    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:59:08 functional-933184 kubelet[9144]: E1006 14:59:08.282920    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:59:09 functional-933184 kubelet[9144]: E1006 14:59:09.283039    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:59:11 functional-933184 kubelet[9144]: E1006 14:59:11.283275    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:59:21 functional-933184 kubelet[9144]: E1006 14:59:21.282834    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-v749q" podUID="57f45961-3c12-411b-8d51-7296ab506a54"
	Oct 06 14:59:23 functional-933184 kubelet[9144]: E1006 14:59:23.282517    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:59:23 functional-933184 kubelet[9144]: E1006 14:59:23.283020    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:59:35 functional-933184 kubelet[9144]: E1006 14:59:35.283056    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:59:36 functional-933184 kubelet[9144]: E1006 14:59:36.283863    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	
	
	==> storage-provisioner [2e9d6beb6e28] <==
	I1006 14:49:02.934187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 14:49:02.938570       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [67fbd4a16067] <==
	W1006 14:59:13.632492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:15.635218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:15.642221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:17.645229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:17.650750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:19.654156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:19.658703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:21.662050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:21.669072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:23.672227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:23.676647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:25.680197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:25.687456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:27.690319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:27.695335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:29.698613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:29.703288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:31.706094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:31.713240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:33.716068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:33.722004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:35.725506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:35.730251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:37.734354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:59:37.741574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
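The dump above points at a single root cause: every failing pod is in ImagePullBackOff because unauthenticated pulls from Docker Hub are throttled ("toomanyrequests"). The first storage-provisioner container (2e9d6beb6e28) exited at startup only because the API server was not yet serving at 14:49:02; its replacement (67fbd4a16067) is healthy and logs nothing worse than Endpoints-deprecation warnings. One way to re-read a single container's output directly on the node, sketched here assuming the docker driver used in this run:

    out/minikube-linux-arm64 -p functional-933184 ssh "docker logs 2e9d6beb6e28"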
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-933184 -n functional-933184
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-connect-7d85dfc575-8vhg5 sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933184 describe pod hello-node-connect-7d85dfc575-8vhg5 sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-933184 describe pod hello-node-connect-7d85dfc575-8vhg5 sp-pod:

-- stdout --
	Name:             hello-node-connect-7d85dfc575-8vhg5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933184/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:49:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8hlxw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8hlxw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8vhg5 to functional-933184
	  Warning  Failed     8m31s                 kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m59s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m59s (x4 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m59s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933184/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:49:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4dbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p4dbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-933184
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m21s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m21s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m21s (x4 over 9m53s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3s (x44 over 10m)      kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x44 over 10m)      kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.30s)
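The service plumbing under test never ran: the echo-server image could not be pulled past Docker Hub's unauthenticated rate limit. One common mitigation is to authenticate pulls through a registry secret; the sketch below uses placeholder credentials (<user> and <token> are assumptions, not values from this run). Starting the cluster with minikube's --registry-mirror flag pointed at a mirror is another option.

    kubectl --context functional-933184 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>
    kubectl --context functional-933184 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'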

TestFunctional/parallel/PersistentVolumeClaim (249.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003693162s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-933184 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-933184 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-933184 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-933184 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [946eca8a-de0f-49f0-9a33-e2841725c94c] Pending
helpers_test.go:352: "sp-pod" [946eca8a-de0f-49f0-9a33-e2841725c94c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-933184 -n functional-933184
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-06 14:53:33.931816159 +0000 UTC m=+1992.729684938
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-933184 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-933184 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-933184/192.168.49.2
Start Time:       Mon, 06 Oct 2025 14:49:33 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4dbq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-p4dbq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/sp-pod to functional-933184
  Warning  Failed     4m                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    76s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     76s (x5 over 4m)     kubelet            Error: ErrImagePull
  Warning  Failed     76s (x4 over 3m48s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     16s (x15 over 4m)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    4s (x16 over 4m)     kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-933184 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-933184 logs sp-pod -n default: exit status 1 (110.44066ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-933184 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
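The PVC machinery itself held up: the claim was applied, sp-pod was scheduled, and the volume mounted; only the docker.io/nginx pull failed. A quick check that the claim bound, sketched here since its output is not captured in this run (Bound would be the expected result if dynamic provisioning succeeded):

    kubectl --context functional-933184 get pvc myclaim -o jsonpath='{.status.phase}'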
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-933184
helpers_test.go:243: (dbg) docker inspect functional-933184:

-- stdout --
	[
	    {
	        "Id": "5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501",
	        "Created": "2025-10-06T14:46:07.46263544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 846873,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-06T14:46:07.527334264Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/hosts",
	        "LogPath": "/var/lib/docker/containers/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501/5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501-json.log",
	        "Name": "/functional-933184",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-933184:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-933184",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5fffa5167caabac45ba4a93c231b27c5ba8e17b773fe59ed3dba0f924c646501",
	                "LowerDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8-init/diff:/var/lib/docker/overlay2/e377610d56c190eb4e6f5af0c002c2b677875f0d15e22ba07535ade05d2c2018/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f1d8f9c0ac824372816d2521cb60af248e50c9535b57a7e27ba22c3603fcce8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-933184",
	                "Source": "/var/lib/docker/volumes/functional-933184/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-933184",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-933184",
	                "name.minikube.sigs.k8s.io": "functional-933184",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4c74fefb54016d9db8a4692ad25d486a608673942af5fac2a3ceb965acb0bf5",
	            "SandboxKey": "/var/run/docker/netns/a4c74fefb540",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37520"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37518"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37519"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-933184": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:6c:0f:8d:60:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4301fd96426a7648713373a63590dd66ad770b3f3e9d1c28d9ad21b65bbabb96",
	                    "EndpointID": "cd029ae863a08454ef7f268f8582fe95c68cbe2ca1d8e537bad3d56c5c93c68b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-933184",
	                        "5fffa5167caa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
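The inspect output confirms the node container is running with the resources the profile requested: Memory 4294967296 bytes is exactly the configured 4096 MiB, and NanoCpus 2000000000 is 2 CPUs (compare the Memory:4096 CPUs:2 fields in the cluster config logged below). A sketch for pulling just those two fields with docker's --format templating:

    docker inspect functional-933184 --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'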
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-933184 -n functional-933184
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 logs -n 25: (1.178606362s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-933184 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                      │ minikube          │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ kubectl │ functional-933184 kubectl -- --context functional-933184 get pods                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:48 UTC │
	│ start   │ -p functional-933184 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                 │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:48 UTC │ 06 Oct 25 14:49 UTC │
	│ service │ invalid-svc -p functional-933184                                                                                         │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ cp      │ functional-933184 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                       │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config unset cpus                                                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config get cpus                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ config  │ functional-933184 config set cpus 2                                                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config get cpus                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh -n functional-933184 sudo cat /home/docker/cp-test.txt                                             │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config unset cpus                                                                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ config  │ functional-933184 config get cpus                                                                                        │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ ssh     │ functional-933184 ssh echo hello                                                                                         │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ cp      │ functional-933184 cp functional-933184:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd11068724/001/cp-test.txt │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh cat /etc/hostname                                                                                  │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh -n functional-933184 sudo cat /home/docker/cp-test.txt                                             │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ tunnel  │ functional-933184 tunnel --alsologtostderr                                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ tunnel  │ functional-933184 tunnel --alsologtostderr                                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ cp      │ functional-933184 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ ssh     │ functional-933184 ssh -n functional-933184 sudo cat /tmp/does/not/exist/cp-test.txt                                      │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ tunnel  │ functional-933184 tunnel --alsologtostderr                                                                               │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │                     │
	│ addons  │ functional-933184 addons list                                                                                            │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	│ addons  │ functional-933184 addons list -o json                                                                                    │ functional-933184 │ jenkins │ v1.37.0 │ 06 Oct 25 14:49 UTC │ 06 Oct 25 14:49 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:48:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:48:18.880198  854070 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:48:18.889828  854070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:18.889883  854070 out.go:374] Setting ErrFile to fd 2...
	I1006 14:48:18.889892  854070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:48:18.890285  854070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:48:18.890787  854070 out.go:368] Setting JSON to false
	I1006 14:48:18.892193  854070 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":77451,"bootTime":1759684648,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:48:18.892298  854070 start.go:140] virtualization:  
	I1006 14:48:18.895789  854070 out.go:179] * [functional-933184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:48:18.899573  854070 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:48:18.899646  854070 notify.go:220] Checking for updates...
	I1006 14:48:18.905375  854070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:48:18.908334  854070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:48:18.911406  854070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:48:18.914411  854070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:48:18.917407  854070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:48:18.920885  854070 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:48:18.921001  854070 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:48:18.947463  854070 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:48:18.947580  854070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:48:19.017673  854070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-06 14:48:19.006342719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:48:19.017765  854070 docker.go:318] overlay module found
	I1006 14:48:19.020757  854070 out.go:179] * Using the docker driver based on existing profile
	I1006 14:48:19.023622  854070 start.go:304] selected driver: docker
	I1006 14:48:19.023631  854070 start.go:924] validating driver "docker" against &{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:19.023806  854070 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:48:19.023920  854070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:48:19.087029  854070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-06 14:48:19.076298504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:48:19.087544  854070 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:48:19.087572  854070 cni.go:84] Creating CNI manager for ""
	I1006 14:48:19.087638  854070 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:48:19.087771  854070 start.go:348] cluster config:
	{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:19.090997  854070 out.go:179] * Starting "functional-933184" primary control-plane node in "functional-933184" cluster
	I1006 14:48:19.093837  854070 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:48:19.096874  854070 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:48:19.099815  854070 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:48:19.099862  854070 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
	I1006 14:48:19.099869  854070 cache.go:58] Caching tarball of preloaded images
	I1006 14:48:19.099960  854070 preload.go:233] Found /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1006 14:48:19.099969  854070 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1006 14:48:19.100080  854070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/config.json ...
	I1006 14:48:19.100309  854070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:48:19.125246  854070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1006 14:48:19.125256  854070 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1006 14:48:19.125283  854070 cache.go:232] Successfully downloaded all kic artifacts
	I1006 14:48:19.125311  854070 start.go:360] acquireMachinesLock for functional-933184: {Name:mkca21d6f937ff7127d821d61e44cf2e04756079 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1006 14:48:19.125377  854070 start.go:364] duration metric: took 49.82µs to acquireMachinesLock for "functional-933184"
	I1006 14:48:19.125394  854070 start.go:96] Skipping create...Using existing machine configuration
	I1006 14:48:19.125403  854070 fix.go:54] fixHost starting: 
	I1006 14:48:19.125674  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:48:19.144035  854070 fix.go:112] recreateIfNeeded on functional-933184: state=Running err=<nil>
	W1006 14:48:19.144065  854070 fix.go:138] unexpected machine state, will restart: <nil>
	I1006 14:48:19.147347  854070 out.go:252] * Updating the running docker "functional-933184" container ...
	I1006 14:48:19.147374  854070 machine.go:93] provisionDockerMachine start ...
	I1006 14:48:19.147456  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.164990  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.165309  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.165316  854070 main.go:141] libmachine: About to run SSH command:
	hostname
	I1006 14:48:19.303353  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-933184
	
	I1006 14:48:19.303367  854070 ubuntu.go:182] provisioning hostname "functional-933184"
	I1006 14:48:19.303439  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.324077  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.324406  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.324415  854070 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-933184 && echo "functional-933184" | sudo tee /etc/hostname
	I1006 14:48:19.469521  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-933184
	
	I1006 14:48:19.469596  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.496302  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.496607  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.496628  854070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-933184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-933184/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-933184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1006 14:48:19.632163  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1006 14:48:19.632178  854070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21701-803497/.minikube CaCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21701-803497/.minikube}
	I1006 14:48:19.632194  854070 ubuntu.go:190] setting up certificates
	I1006 14:48:19.632203  854070 provision.go:84] configureAuth start
	I1006 14:48:19.632263  854070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-933184
	I1006 14:48:19.649796  854070 provision.go:143] copyHostCerts
	I1006 14:48:19.649853  854070 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem, removing ...
	I1006 14:48:19.649868  854070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem
	I1006 14:48:19.649944  854070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/ca.pem (1082 bytes)
	I1006 14:48:19.650054  854070 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem, removing ...
	I1006 14:48:19.650058  854070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem
	I1006 14:48:19.650083  854070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/cert.pem (1123 bytes)
	I1006 14:48:19.650144  854070 exec_runner.go:144] found /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem, removing ...
	I1006 14:48:19.650147  854070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem
	I1006 14:48:19.650168  854070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21701-803497/.minikube/key.pem (1675 bytes)
	I1006 14:48:19.650216  854070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem org=jenkins.functional-933184 san=[127.0.0.1 192.168.49.2 functional-933184 localhost minikube]
	I1006 14:48:19.777745  854070 provision.go:177] copyRemoteCerts
	I1006 14:48:19.777796  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1006 14:48:19.777845  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.796089  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:19.898424  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1006 14:48:19.919997  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1006 14:48:19.938457  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1006 14:48:19.956564  854070 provision.go:87] duration metric: took 324.347297ms to configureAuth
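
The configureAuth step above stages Docker's standard three-file TLS layout (ca.pem, server.pem, server-key.pem under /etc/docker). A minimal sketch of talking to a daemon secured this way, assuming client certs in a conventional $HOME/.minikube/certs location (the 192.168.49.2:2376 endpoint comes from this log and the unit file rendered below; everything else is an assumption):

    # hypothetical client-side invocation against the TLS port the unit file exposes
    docker --tlsverify \
      --tlscacert="$HOME/.minikube/certs/ca.pem" \
      --tlscert="$HOME/.minikube/certs/cert.pem" \
      --tlskey="$HOME/.minikube/certs/key.pem" \
      -H tcp://192.168.49.2:2376 version
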
	I1006 14:48:19.956580  854070 ubuntu.go:206] setting minikube options for container-runtime
	I1006 14:48:19.956778  854070 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:48:19.956827  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:19.974250  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:19.974609  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:19.974617  854070 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1006 14:48:20.125001  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1006 14:48:20.125013  854070 ubuntu.go:71] root file system type: overlay
	I1006 14:48:20.125168  854070 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1006 14:48:20.125241  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.144858  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:20.145159  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:20.145234  854070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1006 14:48:20.289727  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1006 14:48:20.289816  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.309957  854070 main.go:141] libmachine: Using SSH client type: native
	I1006 14:48:20.310249  854070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 37516 <nil> <nil>}
	I1006 14:48:20.310264  854070 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1006 14:48:20.456427  854070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
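
The one-liner above is minikube's idempotent unit update: diff -u compares the rendered unit with the installed one, and the mv/daemon-reload/restart branch runs only when diff exits non-zero (the files differ). To confirm which ExecStart actually took effect after such a swap, standard systemd tooling is enough, e.g.:

    # show the effective unit, including any drop-ins, and its ExecStart lines
    sudo systemctl cat docker.service | grep -A3 '^ExecStart='
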
	I1006 14:48:20.456440  854070 machine.go:96] duration metric: took 1.309058805s to provisionDockerMachine
	I1006 14:48:20.456449  854070 start.go:293] postStartSetup for "functional-933184" (driver="docker")
	I1006 14:48:20.456458  854070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1006 14:48:20.456541  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1006 14:48:20.456580  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.474077  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.571781  854070 ssh_runner.go:195] Run: cat /etc/os-release
	I1006 14:48:20.575141  854070 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1006 14:48:20.575160  854070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1006 14:48:20.575170  854070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/addons for local assets ...
	I1006 14:48:20.575232  854070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21701-803497/.minikube/files for local assets ...
	I1006 14:48:20.575312  854070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem -> 8053512.pem in /etc/ssl/certs
	I1006 14:48:20.575389  854070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/test/nested/copy/805351/hosts -> hosts in /etc/test/nested/copy/805351
	I1006 14:48:20.575439  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/805351
	I1006 14:48:20.583255  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem --> /etc/ssl/certs/8053512.pem (1708 bytes)
	I1006 14:48:20.601564  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/test/nested/copy/805351/hosts --> /etc/test/nested/copy/805351/hosts (40 bytes)
	I1006 14:48:20.619648  854070 start.go:296] duration metric: took 163.185604ms for postStartSetup
	I1006 14:48:20.619764  854070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 14:48:20.619802  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.636974  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.729447  854070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1006 14:48:20.734813  854070 fix.go:56] duration metric: took 1.609406865s for fixHost
	I1006 14:48:20.734828  854070 start.go:83] releasing machines lock for "functional-933184", held for 1.609444328s
	I1006 14:48:20.734896  854070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-933184
	I1006 14:48:20.751967  854070 ssh_runner.go:195] Run: cat /version.json
	I1006 14:48:20.752025  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.752308  854070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1006 14:48:20.752360  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:48:20.781812  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.792778  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:48:20.888346  854070 ssh_runner.go:195] Run: systemctl --version
	I1006 14:48:21.032661  854070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1006 14:48:21.037194  854070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1006 14:48:21.037256  854070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1006 14:48:21.045539  854070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1006 14:48:21.045556  854070 start.go:495] detecting cgroup driver to use...
	I1006 14:48:21.045589  854070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:48:21.045692  854070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:21.060738  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1006 14:48:21.070442  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1006 14:48:21.079988  854070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1006 14:48:21.080062  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1006 14:48:21.089988  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:48:21.099219  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1006 14:48:21.108899  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1006 14:48:21.118826  854070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1006 14:48:21.127804  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1006 14:48:21.137959  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1006 14:48:21.147314  854070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1006 14:48:21.161424  854070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1006 14:48:21.169648  854070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1006 14:48:21.177481  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:21.321063  854070 ssh_runner.go:195] Run: sudo systemctl restart containerd
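
The sed pipeline above rewrites /etc/containerd/config.toml to match the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are mapped to runc.v2, and unprivileged ports are enabled under the CRI plugin. A quick way to confirm the result on the machine (a sketch):

    grep -nE 'SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected after the edits above: SystemdCgroup = false, enable_unprivileged_ports = true
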
	I1006 14:48:21.566626  854070 start.go:495] detecting cgroup driver to use...
	I1006 14:48:21.566677  854070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1006 14:48:21.566734  854070 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1006 14:48:21.591258  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:21.605453  854070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1006 14:48:21.643429  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1006 14:48:21.658840  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1006 14:48:21.674415  854070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1006 14:48:21.691411  854070 ssh_runner.go:195] Run: which cri-dockerd
	I1006 14:48:21.695512  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1006 14:48:21.703524  854070 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1006 14:48:21.717517  854070 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1006 14:48:21.865866  854070 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1006 14:48:22.020236  854070 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1006 14:48:22.020319  854070 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1006 14:48:22.037261  854070 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1006 14:48:22.050632  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:22.216231  854070 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1006 14:48:48.121717  854070 ssh_runner.go:235] Completed: sudo systemctl restart docker: (25.905463969s)
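
Roughly 26 seconds for "systemctl restart docker" is on the slow side for a warm container; if a restart like this recurs, the daemon journal is the first place to look (standard systemd tooling, a sketch):

    sudo journalctl -u docker.service --no-pager -n 50
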
	I1006 14:48:48.121779  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1006 14:48:48.138376  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1006 14:48:48.160747  854070 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1006 14:48:48.191436  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:48:48.204419  854070 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1006 14:48:48.331884  854070 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1006 14:48:48.447368  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:48.574130  854070 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1006 14:48:48.589966  854070 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1006 14:48:48.603625  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:48.731994  854070 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1006 14:48:48.831628  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1006 14:48:48.845580  854070 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1006 14:48:48.845636  854070 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1006 14:48:48.849656  854070 start.go:563] Will wait 60s for crictl version
	I1006 14:48:48.849712  854070 ssh_runner.go:195] Run: which crictl
	I1006 14:48:48.854548  854070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1006 14:48:48.879534  854070 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I1006 14:48:48.879591  854070 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:48:48.903115  854070 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1006 14:48:48.929131  854070 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.4.0 ...
	I1006 14:48:48.929231  854070 cli_runner.go:164] Run: docker network inspect functional-933184 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1006 14:48:48.946082  854070 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1006 14:48:48.953512  854070 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1006 14:48:48.956232  854070 kubeadm.go:883] updating cluster {Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1006 14:48:48.956349  854070 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1006 14:48:48.956419  854070 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:48:48.975480  854070 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-933184
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1006 14:48:48.975493  854070 docker.go:621] Images already preloaded, skipping extraction
	I1006 14:48:48.975556  854070 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1006 14:48:48.995939  854070 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-933184
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1006 14:48:48.995953  854070 cache_images.go:85] Images are preloaded, skipping loading
	I1006 14:48:48.995961  854070 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 docker true true} ...
	I1006 14:48:48.996064  854070 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-933184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1006 14:48:48.996130  854070 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1006 14:48:49.054160  854070 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1006 14:48:49.054181  854070 cni.go:84] Creating CNI manager for ""
	I1006 14:48:49.054201  854070 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:48:49.054211  854070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1006 14:48:49.054239  854070 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-933184 NodeName:functional-933184 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1006 14:48:49.054358  854070 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-933184"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
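
	The stream above carries four kubeadm API objects in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Outside minikube's own flow, a file like this can be sanity-checked with kubeadm itself (a sketch using the binary path and file name from this log; "kubeadm config validate" needs kubeadm v1.26+):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml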
	
	I1006 14:48:49.054420  854070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1006 14:48:49.062530  854070 binaries.go:44] Found k8s binaries, skipping transfer
	I1006 14:48:49.062589  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1006 14:48:49.070333  854070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1006 14:48:49.083561  854070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1006 14:48:49.096903  854070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2068 bytes)
	I1006 14:48:49.110517  854070 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1006 14:48:49.114658  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:48:49.246029  854070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:48:49.267129  854070 certs.go:69] Setting up /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184 for IP: 192.168.49.2
	I1006 14:48:49.267139  854070 certs.go:195] generating shared ca certs ...
	I1006 14:48:49.267154  854070 certs.go:227] acquiring lock for ca certs: {Name:mk78547ccc35462965e66385811a001935f7f131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:48:49.267300  854070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key
	I1006 14:48:49.267340  854070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key
	I1006 14:48:49.267346  854070 certs.go:257] generating profile certs ...
	I1006 14:48:49.267432  854070 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.key
	I1006 14:48:49.267478  854070 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/apiserver.key.4a9bd7a8
	I1006 14:48:49.267511  854070 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/proxy-client.key
	I1006 14:48:49.267634  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/805351.pem (1338 bytes)
	W1006 14:48:49.267674  854070 certs.go:480] ignoring /home/jenkins/minikube-integration/21701-803497/.minikube/certs/805351_empty.pem, impossibly tiny 0 bytes
	I1006 14:48:49.267682  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca-key.pem (1675 bytes)
	I1006 14:48:49.267711  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/ca.pem (1082 bytes)
	I1006 14:48:49.267734  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/cert.pem (1123 bytes)
	I1006 14:48:49.267753  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/certs/key.pem (1675 bytes)
	I1006 14:48:49.267805  854070 certs.go:484] found cert: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem (1708 bytes)
	I1006 14:48:49.268391  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1006 14:48:49.297040  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1006 14:48:49.325986  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1006 14:48:49.352372  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1006 14:48:49.380471  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1006 14:48:49.400124  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1006 14:48:49.430696  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1006 14:48:49.474753  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1006 14:48:49.514435  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/ssl/certs/8053512.pem --> /usr/share/ca-certificates/8053512.pem (1708 bytes)
	I1006 14:48:49.551654  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1006 14:48:49.618406  854070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21701-803497/.minikube/certs/805351.pem --> /usr/share/ca-certificates/805351.pem (1338 bytes)
	I1006 14:48:49.654754  854070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1006 14:48:49.671554  854070 ssh_runner.go:195] Run: openssl version
	I1006 14:48:49.679304  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1006 14:48:49.698649  854070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:48:49.707998  854070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  6 14:21 /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:48:49.708053  854070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1006 14:48:49.769857  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1006 14:48:49.782522  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/805351.pem && ln -fs /usr/share/ca-certificates/805351.pem /etc/ssl/certs/805351.pem"
	I1006 14:48:49.794120  854070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/805351.pem
	I1006 14:48:49.800677  854070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  6 14:46 /usr/share/ca-certificates/805351.pem
	I1006 14:48:49.800748  854070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/805351.pem
	I1006 14:48:49.862677  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/805351.pem /etc/ssl/certs/51391683.0"
	I1006 14:48:49.875313  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8053512.pem && ln -fs /usr/share/ca-certificates/8053512.pem /etc/ssl/certs/8053512.pem"
	I1006 14:48:49.886726  854070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8053512.pem
	I1006 14:48:49.893952  854070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  6 14:46 /usr/share/ca-certificates/8053512.pem
	I1006 14:48:49.894019  854070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8053512.pem
	I1006 14:48:49.976687  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8053512.pem /etc/ssl/certs/3ec20f2e.0"
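
The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above follow OpenSSL's hashed-directory convention: the filename is the CA certificate's subject hash plus a collision-resolution suffix, which is how TLS clients locate a trust anchor in /etc/ssl/certs. The same link can be rebuilt by hand, mirroring what the log does (a sketch):

    # <subject-hash>.0 is the lookup key OpenSSL computes for this CA
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
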
	I1006 14:48:49.994312  854070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1006 14:48:50.005609  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1006 14:48:50.092712  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1006 14:48:50.195174  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1006 14:48:50.300346  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1006 14:48:50.378562  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1006 14:48:50.446051  854070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
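
Each "-checkend 86400" probe above asks whether a control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status would make minikube regenerate the cert instead of reusing it. Standalone, the same check reads (one path taken from the log):

    # exit 0 = valid for at least another 24h, exit 1 = expiring within 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another 24h" || echo "expires within 24h"
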
	I1006 14:48:50.590291  854070 kubeadm.go:400] StartCluster: {Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:48:50.590445  854070 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:48:50.690795  854070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1006 14:48:50.706671  854070 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1006 14:48:50.706694  854070 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1006 14:48:50.706742  854070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1006 14:48:50.718678  854070 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:50.719269  854070 kubeconfig.go:125] found "functional-933184" server: "https://192.168.49.2:8441"
	I1006 14:48:50.720993  854070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1006 14:48:50.733066  854070 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-06 14:46:18.452668782 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-06 14:48:49.105743754 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1006 14:48:50.733085  854070 kubeadm.go:1160] stopping kube-system containers ...
	I1006 14:48:50.733157  854070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1006 14:48:50.781735  854070 docker.go:484] Stopping containers: [f0543848bada dbf089d9c53c 550fc01c3445 5330a4986510 47f1155aea2d 2da3d4d0b60c d73913b2b29f da4175c7d7ad c0787bb0e8f3 ec8bd41a3bb5 cda1ee00f9c1 0cc858d04b2e 427dc6f96278 ac7201a1c2c4 111e36f4b9fd eeb21d45d960 c3d511b79b6c 2f62b1a3dcbf 402bdb9bee67 36f0b465533f 46d7a9bf558b 7e3787177fc4 bf3ae8b955f9 d6dca607e1d2 e72d62df25e7 e36c72ae63db 84869ced6e0e 101d181170e5 f28b9b3c458a 30fc03a7a185 980a2bb4b1ff]
	I1006 14:48:50.781824  854070 ssh_runner.go:195] Run: docker stop f0543848bada dbf089d9c53c 550fc01c3445 5330a4986510 47f1155aea2d 2da3d4d0b60c d73913b2b29f da4175c7d7ad c0787bb0e8f3 ec8bd41a3bb5 cda1ee00f9c1 0cc858d04b2e 427dc6f96278 ac7201a1c2c4 111e36f4b9fd eeb21d45d960 c3d511b79b6c 2f62b1a3dcbf 402bdb9bee67 36f0b465533f 46d7a9bf558b 7e3787177fc4 bf3ae8b955f9 d6dca607e1d2 e72d62df25e7 e36c72ae63db 84869ced6e0e 101d181170e5 f28b9b3c458a 30fc03a7a185 980a2bb4b1ff
	I1006 14:48:52.674535  854070 ssh_runner.go:235] Completed: docker stop f0543848bada dbf089d9c53c 550fc01c3445 5330a4986510 47f1155aea2d 2da3d4d0b60c d73913b2b29f da4175c7d7ad c0787bb0e8f3 ec8bd41a3bb5 cda1ee00f9c1 0cc858d04b2e 427dc6f96278 ac7201a1c2c4 111e36f4b9fd eeb21d45d960 c3d511b79b6c 2f62b1a3dcbf 402bdb9bee67 36f0b465533f 46d7a9bf558b 7e3787177fc4 bf3ae8b955f9 d6dca607e1d2 e72d62df25e7 e36c72ae63db 84869ced6e0e 101d181170e5 f28b9b3c458a 30fc03a7a185 980a2bb4b1ff: (1.892654734s)
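
The list-then-stop pair above (docker ps with a name filter matching k8s_*_(kube-system)_, then docker stop on the 31 IDs) collapses into a single pipeline if it ever has to be reproduced by hand; xargs -r skips the stop when nothing matches (a sketch):

    docker ps -a --filter name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
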
	I1006 14:48:52.674604  854070 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1006 14:48:52.798255  854070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1006 14:48:52.823854  854070 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  6 14:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  6 14:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  6 14:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  6 14:46 /etc/kubernetes/scheduler.conf
	
	I1006 14:48:52.823913  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1006 14:48:52.844906  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1006 14:48:52.871472  854070 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:52.871553  854070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1006 14:48:52.892475  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.904615  854070 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:52.904682  854070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1006 14:48:52.918458  854070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1006 14:48:52.937148  854070 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1006 14:48:52.937219  854070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
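
The grep/rm sequence above applies one rule per kubeconfig: if the file does not already point at https://control-plane.minikube.internal:8441, delete it so the upcoming "kubeadm init phase kubeconfig" regenerates it (admin.conf passed the check, so only the other three were removed). The same logic as a loop (a sketch):

    for f in kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
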
	I1006 14:48:52.956287  854070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1006 14:48:52.968354  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:53.024185  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:55.876937  854070 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.852727077s)
	I1006 14:48:55.876995  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:56.110402  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:56.177562  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:48:56.252607  854070 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:48:56.252678  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:56.753096  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:57.252778  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:57.752801  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:48:57.774902  854070 api_server.go:72] duration metric: took 1.522302209s to wait for apiserver process to appear ...
	I1006 14:48:57.774916  854070 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:48:57.774938  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.047293  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 14:49:02.047330  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.159621  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1006 14:49:02.275966  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.318215  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:49:02.775888  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:02.786691  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:49:03.275979  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:03.289549  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1006 14:49:03.775159  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:03.783520  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1006 14:49:03.797411  854070 api_server.go:141] control plane version: v1.34.1
	I1006 14:49:03.797427  854070 api_server.go:131] duration metric: took 6.022506122s to wait for apiserver health ...
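	Across the three 500 responses the failed hooks flip to ok one at a time (start-kubernetes-service-cidr-controller by 14:49:02.786, scheduling/bootstrap-system-priority-classes by 14:49:03.289, rbac/bootstrap-roles last), and the whole wait takes about 6 seconds at a ~500 ms poll interval. A shell equivalent of the loop minikube runs here, assuming the same endpoint:
	
	    # curl -w prints 000 while the socket is refused, so that case loops too
	    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.49.2:8441/healthz)" = "200" ]; do
	      sleep 0.5
	    done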
	I1006 14:49:03.797435  854070 cni.go:84] Creating CNI manager for ""
	I1006 14:49:03.797445  854070 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:49:03.801089  854070 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1006 14:49:03.804103  854070 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1006 14:49:03.812537  854070 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
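	The 496-byte conflist itself is not printed. A file of the general shape minikube's bridge CNI writes to /etc/cni/net.d/1-k8s.conflist (field values here are illustrative, not read from this run) would be:
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "addIf": "true",
	          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }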
	I1006 14:49:03.826445  854070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:49:03.831717  854070 system_pods.go:59] 7 kube-system pods found
	I1006 14:49:03.831744  854070 system_pods.go:61] "coredns-66bc5c9577-9mq5b" [3f6636b3-0de0-4de3-93dd-f948f6c444a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:49:03.831753  854070 system_pods.go:61] "etcd-functional-933184" [683200d7-2d0c-43e1-91bb-476782967ca9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:49:03.831765  854070 system_pods.go:61] "kube-apiserver-functional-933184" [1b90d7e7-cce6-416a-9bc7-ce9370628c70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:49:03.831771  854070 system_pods.go:61] "kube-controller-manager-functional-933184" [8e62c2c1-bc9a-42b3-be3a-5bbdede385be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:49:03.831777  854070 system_pods.go:61] "kube-proxy-zdgg7" [83956645-5857-4429-94f0-9c15888aef56] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1006 14:49:03.831787  854070 system_pods.go:61] "kube-scheduler-functional-933184" [116b89dc-6a80-489b-b959-c92899fb5d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:49:03.831792  854070 system_pods.go:61] "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:49:03.831800  854070 system_pods.go:74] duration metric: took 5.344758ms to wait for pod list to return data ...
	I1006 14:49:03.831807  854070 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:49:03.838718  854070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:49:03.838738  854070 node_conditions.go:123] node cpu capacity is 2
	I1006 14:49:03.838749  854070 node_conditions.go:105] duration metric: took 6.938195ms to run NodePressure ...
	I1006 14:49:03.838808  854070 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1006 14:49:04.101294  854070 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1006 14:49:04.106152  854070 kubeadm.go:743] kubelet initialised
	I1006 14:49:04.106162  854070 kubeadm.go:744] duration metric: took 4.856201ms waiting for restarted kubelet to initialise ...
	I1006 14:49:04.106176  854070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1006 14:49:04.127107  854070 ops.go:34] apiserver oom_adj: -16
	I1006 14:49:04.127119  854070 kubeadm.go:601] duration metric: took 13.420418925s to restartPrimaryControlPlane
	I1006 14:49:04.127127  854070 kubeadm.go:402] duration metric: took 13.536856366s to StartCluster
	I1006 14:49:04.127142  854070 settings.go:142] acquiring lock: {Name:mk86d6d1803b10e0f74b7ca9be175f37419eb162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:49:04.127214  854070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:49:04.128069  854070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/kubeconfig: {Name:mkd0e7dce0fefee9d8326b7f5e1280f715df58c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:49:04.128340  854070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1006 14:49:04.128537  854070 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:49:04.128571  854070 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1006 14:49:04.128627  854070 addons.go:69] Setting storage-provisioner=true in profile "functional-933184"
	I1006 14:49:04.128639  854070 addons.go:238] Setting addon storage-provisioner=true in "functional-933184"
	W1006 14:49:04.128644  854070 addons.go:247] addon storage-provisioner should already be in state true
	I1006 14:49:04.128665  854070 host.go:66] Checking if "functional-933184" exists ...
	I1006 14:49:04.128773  854070 addons.go:69] Setting default-storageclass=true in profile "functional-933184"
	I1006 14:49:04.128791  854070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-933184"
	I1006 14:49:04.129107  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:49:04.129111  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:49:04.135313  854070 out.go:179] * Verifying Kubernetes components...
	I1006 14:49:04.138343  854070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1006 14:49:04.164970  854070 addons.go:238] Setting addon default-storageclass=true in "functional-933184"
	W1006 14:49:04.164981  854070 addons.go:247] addon default-storageclass should already be in state true
	I1006 14:49:04.165006  854070 host.go:66] Checking if "functional-933184" exists ...
	I1006 14:49:04.165436  854070 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
	I1006 14:49:04.169406  854070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1006 14:49:04.172372  854070 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:49:04.172384  854070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1006 14:49:04.172453  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:49:04.203925  854070 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1006 14:49:04.203938  854070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1006 14:49:04.204143  854070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
	I1006 14:49:04.207020  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:49:04.249369  854070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
	I1006 14:49:04.466542  854070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1006 14:49:04.476665  854070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1006 14:49:04.528320  854070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1006 14:49:05.243502  854070 node_ready.go:35] waiting up to 6m0s for node "functional-933184" to be "Ready" ...
	I1006 14:49:05.246400  854070 node_ready.go:49] node "functional-933184" is "Ready"
	I1006 14:49:05.246416  854070 node_ready.go:38] duration metric: took 2.88453ms for node "functional-933184" to be "Ready" ...
	I1006 14:49:05.246431  854070 api_server.go:52] waiting for apiserver process to appear ...
	I1006 14:49:05.246496  854070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 14:49:05.257640  854070 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1006 14:49:05.260712  854070 addons.go:514] duration metric: took 1.132109358s for enable addons: enabled=[storage-provisioner default-storageclass]
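	Both addons are plain manifests applied with the bundled kubectl (the two KUBECONFIG=... apply commands above). A quick external check that they took effect, using the context name this run configures:
	
	    kubectl --context functional-933184 -n kube-system get pod storage-provisioner
	    kubectl --context functional-933184 get storageclass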
	I1006 14:49:05.262734  854070 api_server.go:72] duration metric: took 1.134369057s to wait for apiserver process to appear ...
	I1006 14:49:05.262757  854070 api_server.go:88] waiting for apiserver healthz status ...
	I1006 14:49:05.262775  854070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1006 14:49:05.272656  854070 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1006 14:49:05.273684  854070 api_server.go:141] control plane version: v1.34.1
	I1006 14:49:05.273697  854070 api_server.go:131] duration metric: took 10.934975ms to wait for apiserver health ...
	I1006 14:49:05.273705  854070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1006 14:49:05.276857  854070 system_pods.go:59] 7 kube-system pods found
	I1006 14:49:05.276876  854070 system_pods.go:61] "coredns-66bc5c9577-9mq5b" [3f6636b3-0de0-4de3-93dd-f948f6c444a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:49:05.276883  854070 system_pods.go:61] "etcd-functional-933184" [683200d7-2d0c-43e1-91bb-476782967ca9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:49:05.276892  854070 system_pods.go:61] "kube-apiserver-functional-933184" [1b90d7e7-cce6-416a-9bc7-ce9370628c70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:49:05.276897  854070 system_pods.go:61] "kube-controller-manager-functional-933184" [8e62c2c1-bc9a-42b3-be3a-5bbdede385be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:49:05.276902  854070 system_pods.go:61] "kube-proxy-zdgg7" [83956645-5857-4429-94f0-9c15888aef56] Running
	I1006 14:49:05.276908  854070 system_pods.go:61] "kube-scheduler-functional-933184" [116b89dc-6a80-489b-b959-c92899fb5d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:49:05.276913  854070 system_pods.go:61] "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:49:05.276918  854070 system_pods.go:74] duration metric: took 3.208118ms to wait for pod list to return data ...
	I1006 14:49:05.276925  854070 default_sa.go:34] waiting for default service account to be created ...
	I1006 14:49:05.279144  854070 default_sa.go:45] found service account: "default"
	I1006 14:49:05.279157  854070 default_sa.go:55] duration metric: took 2.227542ms for default service account to be created ...
	I1006 14:49:05.279165  854070 system_pods.go:116] waiting for k8s-apps to be running ...
	I1006 14:49:05.281979  854070 system_pods.go:86] 7 kube-system pods found
	I1006 14:49:05.281995  854070 system_pods.go:89] "coredns-66bc5c9577-9mq5b" [3f6636b3-0de0-4de3-93dd-f948f6c444a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1006 14:49:05.282003  854070 system_pods.go:89] "etcd-functional-933184" [683200d7-2d0c-43e1-91bb-476782967ca9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1006 14:49:05.282011  854070 system_pods.go:89] "kube-apiserver-functional-933184" [1b90d7e7-cce6-416a-9bc7-ce9370628c70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1006 14:49:05.282020  854070 system_pods.go:89] "kube-controller-manager-functional-933184" [8e62c2c1-bc9a-42b3-be3a-5bbdede385be] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1006 14:49:05.282024  854070 system_pods.go:89] "kube-proxy-zdgg7" [83956645-5857-4429-94f0-9c15888aef56] Running
	I1006 14:49:05.282030  854070 system_pods.go:89] "kube-scheduler-functional-933184" [116b89dc-6a80-489b-b959-c92899fb5d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1006 14:49:05.282039  854070 system_pods.go:89] "storage-provisioner" [85b5a712-1fa6-4db4-819a-74cb978e3ced] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1006 14:49:05.282045  854070 system_pods.go:126] duration metric: took 2.875866ms to wait for k8s-apps to be running ...
	I1006 14:49:05.282052  854070 system_svc.go:44] waiting for kubelet service to be running ....
	I1006 14:49:05.282109  854070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 14:49:05.296087  854070 system_svc.go:56] duration metric: took 14.02408ms WaitForService to wait for kubelet
	I1006 14:49:05.296104  854070 kubeadm.go:586] duration metric: took 1.16774243s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1006 14:49:05.296121  854070 node_conditions.go:102] verifying NodePressure condition ...
	I1006 14:49:05.299847  854070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1006 14:49:05.299872  854070 node_conditions.go:123] node cpu capacity is 2
	I1006 14:49:05.299897  854070 node_conditions.go:105] duration metric: took 3.767328ms to run NodePressure ...
	I1006 14:49:05.299912  854070 start.go:241] waiting for startup goroutines ...
	I1006 14:49:05.299919  854070 start.go:246] waiting for cluster config update ...
	I1006 14:49:05.299930  854070 start.go:255] writing updated cluster config ...
	I1006 14:49:05.300290  854070 ssh_runner.go:195] Run: rm -f paused
	I1006 14:49:05.305653  854070 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1006 14:49:05.309970  854070 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9mq5b" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 14:49:07.315269  854070 pod_ready.go:104] pod "coredns-66bc5c9577-9mq5b" is not "Ready", error: <nil>
	I1006 14:49:07.815871  854070 pod_ready.go:94] pod "coredns-66bc5c9577-9mq5b" is "Ready"
	I1006 14:49:07.815887  854070 pod_ready.go:86] duration metric: took 2.505903338s for pod "coredns-66bc5c9577-9mq5b" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:07.818619  854070 pod_ready.go:83] waiting for pod "etcd-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:07.823273  854070 pod_ready.go:94] pod "etcd-functional-933184" is "Ready"
	I1006 14:49:07.823287  854070 pod_ready.go:86] duration metric: took 4.656599ms for pod "etcd-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:07.826051  854070 pod_ready.go:83] waiting for pod "kube-apiserver-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	W1006 14:49:09.831884  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	W1006 14:49:11.831972  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	W1006 14:49:14.331285  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	W1006 14:49:16.331367  854070 pod_ready.go:104] pod "kube-apiserver-functional-933184" is not "Ready", error: <nil>
	I1006 14:49:16.832353  854070 pod_ready.go:94] pod "kube-apiserver-functional-933184" is "Ready"
	I1006 14:49:16.832368  854070 pod_ready.go:86] duration metric: took 9.00630456s for pod "kube-apiserver-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.834814  854070 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.839467  854070 pod_ready.go:94] pod "kube-controller-manager-functional-933184" is "Ready"
	I1006 14:49:16.839480  854070 pod_ready.go:86] duration metric: took 4.653349ms for pod "kube-controller-manager-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.841754  854070 pod_ready.go:83] waiting for pod "kube-proxy-zdgg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.846259  854070 pod_ready.go:94] pod "kube-proxy-zdgg7" is "Ready"
	I1006 14:49:16.846274  854070 pod_ready.go:86] duration metric: took 4.507933ms for pod "kube-proxy-zdgg7" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:16.848715  854070 pod_ready.go:83] waiting for pod "kube-scheduler-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:17.030370  854070 pod_ready.go:94] pod "kube-scheduler-functional-933184" is "Ready"
	I1006 14:49:17.030384  854070 pod_ready.go:86] duration metric: took 181.658137ms for pod "kube-scheduler-functional-933184" in "kube-system" namespace to be "Ready" or be gone ...
	I1006 14:49:17.030396  854070 pod_ready.go:40] duration metric: took 11.724706092s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
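	The same readiness gate can be checked by hand with kubectl wait, using the labels listed in the log line above:
	
	    kubectl --context functional-933184 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	    kubectl --context functional-933184 -n kube-system get pods \
	      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'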
	I1006 14:49:17.083249  854070 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1006 14:49:17.086520  854070 out.go:179] * Done! kubectl is now configured to use "functional-933184" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 06 14:48:53 functional-933184 cri-dockerd[7516]: W1006 14:48:53.008164    7516 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Oct 06 14:48:57 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:48:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89e3edb922f15f58bcdc692adaddb5ac8adaee9ea2b2f7a2bf8b37446eaaf578/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Oct 06 14:49:02 functional-933184 dockerd[6746]: time="2025-10-06T14:49:02.353178873Z" level=info msg="ignoring event" container=d1042b241d5307e9d326e3a4d6311fa4fd92ee4afffda82026ef261978f06358 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:49:02 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:02Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Oct 06 14:49:03 functional-933184 dockerd[6746]: time="2025-10-06T14:49:03.023727591Z" level=info msg="ignoring event" container=2e9d6beb6e280132e0ca12c7ba084687de41e75796d94d2ee5841bd5157d0b6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:49:20 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41a4f3c848c7235c389e3839007e684640d970ef8cc46622aec78e8e6749f4f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:49:20 functional-933184 dockerd[6746]: time="2025-10-06T14:49:20.572904349Z" level=error msg="Not continuing with pull after error" error="errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
	Oct 06 14:49:20 functional-933184 dockerd[6746]: time="2025-10-06T14:49:20.572955187Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
	Oct 06 14:49:23 functional-933184 dockerd[6746]: time="2025-10-06T14:49:23.834196816Z" level=info msg="ignoring event" container=41a4f3c848c7235c389e3839007e684640d970ef8cc46622aec78e8e6749f4f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 06 14:49:27 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b86fa3d466cfc8acd6a4871997b1b2122139ad662e9a33dcd73c027bc73064e9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:49:29 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:29Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Oct 06 14:49:34 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c2dfcec6aaeee6def3d791f41bc50d6ec2b79055527656c1e146e0104a7a2f16/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:49:34 functional-933184 dockerd[6746]: time="2025-10-06T14:49:34.423869371Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:49:34 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:34Z" level=info msg="Stop pulling image docker.io/nginx:latest: latest: Pulling from library/nginx"
	Oct 06 14:49:36 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:49:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/37b0bf96d0811dd5803e487c68f0cb2928340eac7f232dbe66cfa7778489622a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Oct 06 14:49:37 functional-933184 dockerd[6746]: time="2025-10-06T14:49:37.271565170Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:49:46 functional-933184 dockerd[6746]: time="2025-10-06T14:49:46.509312097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:49:51 functional-933184 dockerd[6746]: time="2025-10-06T14:49:51.494347395Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:50:14 functional-933184 dockerd[6746]: time="2025-10-06T14:50:14.503316691Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:50:16 functional-933184 dockerd[6746]: time="2025-10-06T14:50:16.507032785Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:50:56 functional-933184 dockerd[6746]: time="2025-10-06T14:50:56.504793642Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:51:08 functional-933184 dockerd[6746]: time="2025-10-06T14:51:08.585253608Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:51:08 functional-933184 cri-dockerd[7516]: time="2025-10-06T14:51:08Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Oct 06 14:52:18 functional-933184 dockerd[6746]: time="2025-10-06T14:52:18.488644772Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 06 14:52:40 functional-933184 dockerd[6746]: time="2025-10-06T14:52:40.520546528Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
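	The repeated toomanyrequests errors above are the most likely root cause visible in this section: from 14:49:34 on, every unauthenticated Docker Hub pull (nginx:latest, kicbase/echo-server:latest, and whatever else the failing tests request) is rejected by the pull rate limit, so pods that need those images never start. Docker's documented way to inspect the remaining quota from the affected host (requires jq):
	
	    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	    curl -sI -H "Authorization: Bearer $TOKEN" \
	         https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit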
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6d9ba03b8863c       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8   4 minutes ago       Running             nginx                     0                   b86fa3d466cfc       nginx-svc                                   default
	67fbd4a16067e       ba04bb24b9575                                                                   4 minutes ago       Running             storage-provisioner       4                   154edd9f915d2       storage-provisioner                         kube-system
	2e9d6beb6e280       ba04bb24b9575                                                                   4 minutes ago       Exited              storage-provisioner       3                   154edd9f915d2       storage-provisioner                         kube-system
	156725d59efd7       05baa95f5142d                                                                   4 minutes ago       Running             kube-proxy                3                   21b4db5f04536       kube-proxy-zdgg7                            kube-system
	bd006eabbe87b       138784d87c9c5                                                                   4 minutes ago       Running             coredns                   2                   a3bbd10247337       coredns-66bc5c9577-9mq5b                    kube-system
	6fcdf6f551c14       43911e833d64d                                                                   4 minutes ago       Running             kube-apiserver            0                   89e3edb922f15       kube-apiserver-functional-933184            kube-system
	8e509ed52ab67       b5f57ec6b9867                                                                   4 minutes ago       Running             kube-scheduler            3                   feb368f89cc4e       kube-scheduler-functional-933184            kube-system
	bcb39dc782d61       7eb2c6ff0c5a7                                                                   4 minutes ago       Running             kube-controller-manager   3                   c226e4161bd80       kube-controller-manager-functional-933184   kube-system
	ab99eb78d7130       a1894772a478e                                                                   4 minutes ago       Running             etcd                      2                   ecf4d7659f06c       etcd-functional-933184                      kube-system
	f0543848bada1       b5f57ec6b9867                                                                   4 minutes ago       Exited              kube-scheduler            2                   47f1155aea2db       kube-scheduler-functional-933184            kube-system
	dbf089d9c53ce       05baa95f5142d                                                                   4 minutes ago       Exited              kube-proxy                2                   d73913b2b29fc       kube-proxy-zdgg7                            kube-system
	550fc01c34458       7eb2c6ff0c5a7                                                                   4 minutes ago       Exited              kube-controller-manager   2                   ec8bd41a3bb5b       kube-controller-manager-functional-933184   kube-system
	427dc6f962780       138784d87c9c5                                                                   5 minutes ago       Exited              coredns                   1                   ac7201a1c2c4f       coredns-66bc5c9577-9mq5b                    kube-system
	402bdb9bee67e       a1894772a478e                                                                   5 minutes ago       Exited              etcd                      1                   bf3ae8b955f98       etcd-functional-933184                      kube-system
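	This table is CRI-level container state; the Exited rows are earlier attempts left behind by the restarts. The same view can be pulled from the live node with:
	
	    minikube -p functional-933184 ssh -- sudo crictl ps -a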
	
	
	==> coredns [427dc6f96278] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38471 - 40348 "HINFO IN 1928811608007205393.1693971363683984954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014901569s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bd006eabbe87] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43853 - 59172 "HINFO IN 6573570928990532390.5180121269576993269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013433314s
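	The connection-refused burst at the top of this instance's log is the new CoreDNS pod retrying its list/watch calls while kube-apiserver was still coming back up; once the API answered, startup completed normally. To fetch the same logs directly:
	
	    kubectl --context functional-933184 -n kube-system logs coredns-66bc5c9577-9mq5b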
	
	
	==> describe nodes <==
	Name:               functional-933184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-933184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=512f8c40caa35ce8d82a76bc06907e2d11c89c81
	                    minikube.k8s.io/name=functional-933184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_06T14_46_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 06 Oct 2025 14:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-933184
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 06 Oct 2025 14:53:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 06 Oct 2025 14:52:57 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 06 Oct 2025 14:52:57 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 06 Oct 2025 14:52:57 +0000   Mon, 06 Oct 2025 14:46:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 06 Oct 2025 14:52:57 +0000   Mon, 06 Oct 2025 14:46:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-933184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ab0fdbff02456391dde75296bb36e5
	  System UUID:                9a0c63bd-fa52-4df3-ab5b-d64d258d24eb
	  Boot ID:                    2fc2fcec-a145-448c-8b5d-9e614a6ff2df
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-8vhg5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-9mq5b                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m54s
	  kube-system                 etcd-functional-933184                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m59s
	  kube-system                 kube-apiserver-functional-933184             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-functional-933184    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-proxy-zdgg7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-scheduler-functional-933184             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m52s                  kube-proxy       
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   Starting                 5m36s                  kube-proxy       
	  Normal   Starting                 7m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    7m7s (x8 over 7m7s)    kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m7s (x8 over 7m7s)    kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     7m7s (x7 over 7m7s)    kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  6m59s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m59s                  kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m59s                  kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m59s                  kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m59s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           6m55s                  node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	  Normal   NodeReady                6m54s                  kubelet          Node functional-933184 status is now: NodeReady
	  Normal   RegisteredNode           5m34s                  node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	  Warning  ContainerGCFailed        4m59s (x2 over 5m59s)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node functional-933184 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 4m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node functional-933184 status is now: NodeHasSufficientMemory
	  Normal   Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node functional-933184 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m30s                  node-controller  Node functional-933184 event: Registered Node functional-933184 in Controller
	
	
	==> dmesg <==
	[Oct 6 12:59] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 13:11] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 6 14:19] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [402bdb9bee67] <==
	{"level":"warn","ts":"2025-10-06T14:47:57.087762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.109164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.128203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.162961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.174001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.196398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:47:57.300785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58190","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-06T14:48:37.808552Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-06T14:48:37.808609Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-933184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-06T14:48:37.808717Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T14:48:44.815049Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-06T14:48:44.815130Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.815150Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-06T14:48:44.815235Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-06T14:48:44.815247Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817595Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817718Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T14:48:44.817751Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-06T14:48:44.817963Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-06T14:48:44.818014Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-06T14:48:44.818047Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.822615Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-06T14:48:44.822718Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-06T14:48:44.822982Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-06T14:48:44.823005Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-933184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ab99eb78d713] <==
	{"level":"warn","ts":"2025-10-06T14:49:00.269059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.331964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.359357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.396546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.422633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.466690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.506841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.532156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.559195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.584785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.615586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.650951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.680894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.705590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.745295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.767876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.793778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.850829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.901664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.924749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.954731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:00.988428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.014783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.047809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-06T14:49:01.169500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45350","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:53:35 up 21:36,  0 user,  load average: 0.23, 0.75, 1.21
	Linux functional-933184 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [6fcdf6f551c1] <==
	I1006 14:49:02.308732       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1006 14:49:02.308996       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1006 14:49:02.309582       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1006 14:49:02.325991       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1006 14:49:02.326044       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1006 14:49:02.326090       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1006 14:49:02.326339       1 aggregator.go:171] initial CRD sync complete...
	I1006 14:49:02.326353       1 autoregister_controller.go:144] Starting autoregister controller
	I1006 14:49:02.326360       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1006 14:49:02.326366       1 cache.go:39] Caches are synced for autoregister controller
	I1006 14:49:02.345527       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1006 14:49:02.365629       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1006 14:49:02.405179       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1006 14:49:02.916474       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1006 14:49:03.329580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1006 14:49:03.331065       1 controller.go:667] quota admission added evaluator for: endpoints
	I1006 14:49:03.337228       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1006 14:49:03.951315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1006 14:49:03.989764       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1006 14:49:04.028675       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1006 14:49:04.038814       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1006 14:49:05.810549       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1006 14:49:19.891278       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.103.228.43"}
	I1006 14:49:27.023450       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.166.75"}
	I1006 14:49:36.614963       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.97.222"}
	
	
	==> kube-controller-manager [550fc01c3445] <==
	
	
	==> kube-controller-manager [bcb39dc782d6] <==
	I1006 14:49:05.469490       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1006 14:49:05.470749       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1006 14:49:05.471955       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1006 14:49:05.476205       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1006 14:49:05.481848       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1006 14:49:05.482922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1006 14:49:05.484280       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1006 14:49:05.488298       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1006 14:49:05.490838       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1006 14:49:05.491078       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1006 14:49:05.496597       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1006 14:49:05.499335       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1006 14:49:05.501471       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1006 14:49:05.501811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1006 14:49:05.502067       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1006 14:49:05.502221       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1006 14:49:05.502441       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1006 14:49:05.504112       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1006 14:49:05.504431       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1006 14:49:05.504694       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1006 14:49:05.504891       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1006 14:49:05.505339       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1006 14:49:05.511446       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1006 14:49:05.514540       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1006 14:49:05.518161       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [156725d59efd] <==
	I1006 14:49:03.079329       1 server_linux.go:53] "Using iptables proxy"
	I1006 14:49:03.275374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1006 14:49:03.380514       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1006 14:49:03.380553       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1006 14:49:03.380667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1006 14:49:03.400464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1006 14:49:03.400701       1 server_linux.go:132] "Using iptables Proxier"
	I1006 14:49:03.404946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1006 14:49:03.405244       1 server.go:527] "Version info" version="v1.34.1"
	I1006 14:49:03.405267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:49:03.406398       1 config.go:200] "Starting service config controller"
	I1006 14:49:03.406417       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1006 14:49:03.415411       1 config.go:309] "Starting node config controller"
	I1006 14:49:03.415431       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1006 14:49:03.415440       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1006 14:49:03.415867       1 config.go:106] "Starting endpoint slice config controller"
	I1006 14:49:03.415885       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1006 14:49:03.415900       1 config.go:403] "Starting serviceCIDR config controller"
	I1006 14:49:03.415904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1006 14:49:03.506935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1006 14:49:03.516366       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1006 14:49:03.516382       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dbf089d9c53c] <==
	
	
	==> kube-scheduler [8e509ed52ab6] <==
	I1006 14:49:01.155166       1 serving.go:386] Generated self-signed cert in-memory
	I1006 14:49:02.830831       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1006 14:49:02.830869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1006 14:49:02.837907       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1006 14:49:02.837965       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1006 14:49:02.838019       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:49:02.838034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1006 14:49:02.838050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.838062       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.838500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1006 14:49:02.838651       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1006 14:49:02.938942       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1006 14:49:02.939304       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1006 14:49:02.939325       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f0543848bada] <==
	
	
	==> kubelet <==
	Oct 06 14:51:44 functional-933184 kubelet[9144]: E1006 14:51:44.282713    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:51:48 functional-933184 kubelet[9144]: E1006 14:51:48.283354    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:51:56 functional-933184 kubelet[9144]: E1006 14:51:56.283059    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:52:02 functional-933184 kubelet[9144]: E1006 14:52:02.282440    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:52:07 functional-933184 kubelet[9144]: E1006 14:52:07.283122    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:52:14 functional-933184 kubelet[9144]: E1006 14:52:14.283225    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:52:18 functional-933184 kubelet[9144]: E1006 14:52:18.492770    9144 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 06 14:52:18 functional-933184 kubelet[9144]: E1006 14:52:18.492830    9144 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 06 14:52:18 functional-933184 kubelet[9144]: E1006 14:52:18.492914    9144 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(946eca8a-de0f-49f0-9a33-e2841725c94c): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:52:18 functional-933184 kubelet[9144]: E1006 14:52:18.492966    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:52:27 functional-933184 kubelet[9144]: E1006 14:52:27.282900    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:52:32 functional-933184 kubelet[9144]: E1006 14:52:32.282997    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:52:40 functional-933184 kubelet[9144]: E1006 14:52:40.523361    9144 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 06 14:52:40 functional-933184 kubelet[9144]: E1006 14:52:40.523413    9144 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 06 14:52:40 functional-933184 kubelet[9144]: E1006 14:52:40.523483    9144 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-8vhg5_default(afbbfa0f-3a47-4314-8241-153b7c527e2f): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 06 14:52:40 functional-933184 kubelet[9144]: E1006 14:52:40.523518    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:52:44 functional-933184 kubelet[9144]: E1006 14:52:44.282950    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:52:51 functional-933184 kubelet[9144]: E1006 14:52:51.282510    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:52:55 functional-933184 kubelet[9144]: E1006 14:52:55.282593    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:53:04 functional-933184 kubelet[9144]: E1006 14:53:04.283093    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:53:07 functional-933184 kubelet[9144]: E1006 14:53:07.282531    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:53:18 functional-933184 kubelet[9144]: E1006 14:53:18.282647    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:53:19 functional-933184 kubelet[9144]: E1006 14:53:19.282770    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	Oct 06 14:53:30 functional-933184 kubelet[9144]: E1006 14:53:30.283281    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="946eca8a-de0f-49f0-9a33-e2841725c94c"
	Oct 06 14:53:32 functional-933184 kubelet[9144]: E1006 14:53:32.282936    9144 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-8vhg5" podUID="afbbfa0f-3a47-4314-8241-153b7c527e2f"
	
	
	==> storage-provisioner [2e9d6beb6e28] <==
	I1006 14:49:02.934187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1006 14:49:02.938570       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [67fbd4a16067] <==
	W1006 14:53:09.876698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:11.880105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:11.886789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:13.889621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:13.894240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:15.897555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:15.902156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:17.905520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:17.910115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:19.913005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:19.917506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:21.920526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:21.927454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:23.930440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:23.935441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:25.938854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:25.945775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:27.949892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:27.954702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:29.958072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:29.964855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:31.968786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:31.973632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:33.976453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1006 14:53:33.987475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
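The kubelet entries above show both stuck pods failing for the same root cause: unauthenticated pulls of docker.io/nginx and kicbase/echo-server are hitting Docker Hub's toomanyrequests rate limit, not a problem with the images themselves. A minimal workaround sketch, assuming the images can still be pulled once on the host running minikube (profile name functional-933184 taken from the logs above):

  # Pull each image once on the host, then copy it into the minikube node,
  # so in-cluster pulls can be served from the node's local image store.
  docker pull nginx:latest
  minikube -p functional-933184 image load nginx:latest
  docker pull kicbase/echo-server:latest
  minikube -p functional-933184 image load kicbase/echo-server:latest

  # Confirm the images are now present inside the node:
  minikube -p functional-933184 image ls

Note this only helps when the pod's imagePullPolicy permits a cached image (IfNotPresent); with the Always policy that Kubernetes defaults to for :latest tags, the kubelet still contacts the registry.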
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-933184 -n functional-933184
helpers_test.go:269: (dbg) Run:  kubectl --context functional-933184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-connect-7d85dfc575-8vhg5 sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-933184 describe pod hello-node-connect-7d85dfc575-8vhg5 sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-933184 describe pod hello-node-connect-7d85dfc575-8vhg5 sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-connect-7d85dfc575-8vhg5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933184/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:49:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8hlxw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8hlxw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m59s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-8vhg5 to functional-933184
	  Warning  Failed     2m28s                kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    56s (x5 over 3m59s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     56s (x4 over 3m59s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x5 over 3m59s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x15 over 3m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-933184/192.168.49.2
	Start Time:       Mon, 06 Oct 2025 14:49:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p4dbq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-p4dbq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-933184
	  Warning  Failed     4m2s                 kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    78s (x5 over 4m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     78s (x5 over 4m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     78s (x4 over 3m50s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     18s (x15 over 4m2s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x16 over 4m2s)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.24s)
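Both non-running pods fail on the same toomanyrequests rate limit seen throughout this run. An alternative sketch is to authenticate the pulls rather than avoid them, assuming Docker Hub credentials in the placeholder variables $DOCKER_USER and $DOCKER_PASS (not part of this report):

  # Store registry credentials and attach them to the default service account,
  # so new pods in the default namespace pull under authenticated rate limits.
  kubectl --context functional-933184 create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username="$DOCKER_USER" \
    --docker-password="$DOCKER_PASS"
  kubectl --context functional-933184 patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}'

A service account's imagePullSecrets are injected only at pod creation, so the stuck pods must be recreated: the Deployment-managed hello-node-connect pod returns automatically after a delete, while the bare sp-pod needs its manifest re-applied.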

                                                
                                    

Test pass (314/347)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 3.87
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.17
18 TestDownloadOnly/v1.34.1/DeleteAll 0.35
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.62
22 TestOffline 85.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 174
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.91
35 TestAddons/parallel/Registry 17.53
36 TestAddons/parallel/RegistryCreds 0.69
38 TestAddons/parallel/InspektorGadget 6.25
39 TestAddons/parallel/MetricsServer 5.77
42 TestAddons/parallel/Headlamp 18.82
43 TestAddons/parallel/CloudSpanner 6.58
45 TestAddons/parallel/NvidiaDevicePlugin 6.46
46 TestAddons/parallel/Yakd 11.78
48 TestAddons/StoppedEnableDisable 11.39
49 TestCertOptions 42.11
50 TestCertExpiration 274.63
51 TestDockerFlags 47.86
52 TestForceSystemdFlag 51.64
53 TestForceSystemdEnv 52.53
59 TestErrorSpam/setup 32.16
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.09
62 TestErrorSpam/pause 1.65
63 TestErrorSpam/unpause 1.64
64 TestErrorSpam/stop 2.88
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.21
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 50.5
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
76 TestFunctional/serial/CacheCmd/cache/add_local 1.03
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 58.27
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.21
87 TestFunctional/serial/LogsFileCmd 1.22
88 TestFunctional/serial/InvalidService 5.01
90 TestFunctional/parallel/ConfigCmd 0.56
92 TestFunctional/parallel/DryRun 0.55
93 TestFunctional/parallel/InternationalLanguage 0.2
94 TestFunctional/parallel/StatusCmd 1.19
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.7
103 TestFunctional/parallel/CpCmd 2.41
105 TestFunctional/parallel/FileSync 0.92
106 TestFunctional/parallel/CertSync 2.6
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
114 TestFunctional/parallel/License 0.27
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 364.24
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
128 TestFunctional/parallel/ProfileCmd/profile_list 0.45
129 TestFunctional/parallel/ServiceCmd/List 0.6
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
131 TestFunctional/parallel/MountCmd/any-port 9.3
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
134 TestFunctional/parallel/ServiceCmd/Format 0.51
135 TestFunctional/parallel/ServiceCmd/URL 0.49
136 TestFunctional/parallel/MountCmd/specific-port 1.15
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.33
138 TestFunctional/parallel/Version/short 0.06
139 TestFunctional/parallel/Version/components 1.05
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.65
145 TestFunctional/parallel/ImageCommands/Setup 0.64
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
156 TestFunctional/parallel/DockerEnv/bash 1.06
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 168.13
165 TestMultiControlPlane/serial/DeployApp 9.1
166 TestMultiControlPlane/serial/PingHostFromPods 1.77
167 TestMultiControlPlane/serial/AddWorkerNode 37.29
168 TestMultiControlPlane/serial/NodeLabels 0.13
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
170 TestMultiControlPlane/serial/CopyFile 20.76
171 TestMultiControlPlane/serial/StopSecondaryNode 11.91
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.09
173 TestMultiControlPlane/serial/RestartSecondaryNode 47.32
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.21
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.88
176 TestMultiControlPlane/serial/DeleteSecondaryNode 11.92
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
178 TestMultiControlPlane/serial/StopCluster 32.75
179 TestMultiControlPlane/serial/RestartCluster 119.14
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
181 TestMultiControlPlane/serial/AddSecondaryNode 91.98
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
185 TestImageBuild/serial/Setup 36.67
186 TestImageBuild/serial/NormalBuild 1.67
187 TestImageBuild/serial/BuildWithBuildArg 0.97
188 TestImageBuild/serial/BuildWithDockerIgnore 0.69
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.91
193 TestJSONOutput/start/Command 78.14
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.64
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.62
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 10.99
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.25
218 TestKicCustomNetwork/create_custom_network 37.75
219 TestKicCustomNetwork/use_default_bridge_network 39.81
220 TestKicExistingNetwork 33.75
221 TestKicCustomSubnet 34.64
222 TestKicStaticIP 37.68
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 74.71
227 TestMountStart/serial/StartWithMountFirst 8.44
228 TestMountStart/serial/VerifyMountFirst 0.28
229 TestMountStart/serial/StartWithMountSecond 11.13
230 TestMountStart/serial/VerifyMountSecond 0.27
231 TestMountStart/serial/DeleteFirst 1.48
232 TestMountStart/serial/VerifyMountPostDelete 0.26
233 TestMountStart/serial/Stop 1.2
234 TestMountStart/serial/RestartStopped 8.67
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 93.24
239 TestMultiNode/serial/DeployApp2Nodes 6.8
240 TestMultiNode/serial/PingHostFrom2Pods 1.28
241 TestMultiNode/serial/AddNode 35.15
242 TestMultiNode/serial/MultiNodeLabels 0.09
243 TestMultiNode/serial/ProfileList 0.74
244 TestMultiNode/serial/CopyFile 10.72
245 TestMultiNode/serial/StopNode 2.28
246 TestMultiNode/serial/StartAfterStop 9.35
247 TestMultiNode/serial/RestartKeepsNodes 74.56
248 TestMultiNode/serial/DeleteNode 5.74
249 TestMultiNode/serial/StopMultiNode 21.68
250 TestMultiNode/serial/RestartMultiNode 56.64
251 TestMultiNode/serial/ValidateNameConflict 38.39
256 TestPreload 175.02
258 TestScheduledStopUnix 109.52
259 TestSkaffold 146.24
261 TestInsufficientStorage 14.17
262 TestRunningBinaryUpgrade 82.13
264 TestKubernetesUpgrade 137.6
265 TestMissingContainerUpgrade 89.87
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.17
268 TestNoKubernetes/serial/StartWithK8s 48
269 TestNoKubernetes/serial/StartWithStopK8s 18.79
270 TestNoKubernetes/serial/Start 10.39
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
272 TestNoKubernetes/serial/ProfileList 1.11
273 TestNoKubernetes/serial/Stop 1.23
274 TestNoKubernetes/serial/StartNoArgs 8.33
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
287 TestStoppedBinaryUpgrade/Setup 1.17
288 TestStoppedBinaryUpgrade/Upgrade 79.03
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
298 TestPause/serial/Start 82
299 TestNetworkPlugins/group/auto/Start 80.29
300 TestPause/serial/SecondStartNoReconfiguration 52.29
301 TestNetworkPlugins/group/auto/KubeletFlags 0.34
302 TestNetworkPlugins/group/auto/NetCatPod 9.29
303 TestNetworkPlugins/group/auto/DNS 0.31
304 TestPause/serial/Pause 0.96
305 TestNetworkPlugins/group/auto/Localhost 0.25
306 TestNetworkPlugins/group/auto/HairPin 0.22
307 TestPause/serial/VerifyStatus 0.35
308 TestPause/serial/Unpause 0.61
309 TestPause/serial/PauseAgain 0.81
310 TestPause/serial/DeletePaused 2.23
311 TestPause/serial/VerifyDeletedResources 0.43
312 TestNetworkPlugins/group/kindnet/Start 68.7
313 TestNetworkPlugins/group/calico/Start 69.59
314 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
316 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
317 TestNetworkPlugins/group/kindnet/DNS 0.31
318 TestNetworkPlugins/group/kindnet/Localhost 0.32
319 TestNetworkPlugins/group/kindnet/HairPin 0.25
320 TestNetworkPlugins/group/calico/ControllerPod 5.03
321 TestNetworkPlugins/group/calico/KubeletFlags 0.45
322 TestNetworkPlugins/group/calico/NetCatPod 12.64
323 TestNetworkPlugins/group/calico/DNS 0.28
324 TestNetworkPlugins/group/calico/Localhost 0.33
325 TestNetworkPlugins/group/calico/HairPin 0.31
326 TestNetworkPlugins/group/custom-flannel/Start 60.77
327 TestNetworkPlugins/group/false/Start 85.71
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.5
330 TestNetworkPlugins/group/custom-flannel/DNS 0.29
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.3
333 TestNetworkPlugins/group/enable-default-cni/Start 78.89
334 TestNetworkPlugins/group/false/KubeletFlags 0.44
335 TestNetworkPlugins/group/false/NetCatPod 11.33
336 TestNetworkPlugins/group/false/DNS 0.28
337 TestNetworkPlugins/group/false/Localhost 0.23
338 TestNetworkPlugins/group/false/HairPin 0.26
339 TestNetworkPlugins/group/flannel/Start 53.81
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.46
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
345 TestNetworkPlugins/group/flannel/ControllerPod 6.01
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
347 TestNetworkPlugins/group/flannel/NetCatPod 11.46
348 TestNetworkPlugins/group/bridge/Start 81.37
349 TestNetworkPlugins/group/flannel/DNS 0.24
350 TestNetworkPlugins/group/flannel/Localhost 0.19
351 TestNetworkPlugins/group/flannel/HairPin 0.22
352 TestNetworkPlugins/group/kubenet/Start 74.81
353 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
354 TestNetworkPlugins/group/bridge/NetCatPod 10.28
355 TestNetworkPlugins/group/bridge/DNS 0.18
356 TestNetworkPlugins/group/bridge/Localhost 0.16
357 TestNetworkPlugins/group/bridge/HairPin 0.17
358 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
359 TestNetworkPlugins/group/kubenet/NetCatPod 13.36
361 TestStartStop/group/old-k8s-version/serial/FirstStart 94.9
362 TestNetworkPlugins/group/kubenet/DNS 0.24
363 TestNetworkPlugins/group/kubenet/Localhost 0.19
364 TestNetworkPlugins/group/kubenet/HairPin 0.22
366 TestStartStop/group/no-preload/serial/FirstStart 92.64
367 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
369 TestStartStop/group/old-k8s-version/serial/Stop 11.1
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/old-k8s-version/serial/SecondStart 60.01
372 TestStartStop/group/no-preload/serial/DeployApp 11.65
373 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
374 TestStartStop/group/no-preload/serial/Stop 11.14
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
376 TestStartStop/group/no-preload/serial/SecondStart 52.12
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/old-k8s-version/serial/Pause 3.06
382 TestStartStop/group/embed-certs/serial/FirstStart 88.88
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
386 TestStartStop/group/no-preload/serial/Pause 3.98
388 TestStartStop/group/newest-cni/serial/FirstStart 45.83
389 TestStartStop/group/newest-cni/serial/DeployApp 0
390 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
391 TestStartStop/group/newest-cni/serial/Stop 9.24
392 TestStartStop/group/embed-certs/serial/DeployApp 11.49
393 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
394 TestStartStop/group/newest-cni/serial/SecondStart 20.08
395 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.77
396 TestStartStop/group/embed-certs/serial/Stop 11.82
397 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
400 TestStartStop/group/newest-cni/serial/Pause 3.4
401 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.44
402 TestStartStop/group/embed-certs/serial/SecondStart 57.98
404 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.55
405 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.6
407 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
408 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
409 TestStartStop/group/embed-certs/serial/Pause 3.29
410 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.94
411 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.36
412 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
413 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.17
414 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
415 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
416 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
417 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.98
TestDownloadOnly/v1.28.0/json-events (5.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-379615 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-379615 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.797759479s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.80s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1006 14:20:27.040662  805351 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1006 14:20:27.040756  805351 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
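
The preload check above is essentially a stat of the cached tarball. A minimal sketch of the same idea in Go, assuming the cache layout shown in the log (the CI job uses a per-run MINIKUBE_HOME, approximated here with the user home directory):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache layout taken from the log above; adjust the root if your
	// MINIKUBE_HOME differs.
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", tarball)
}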

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-379615
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-379615: exit status 85 (99.003289ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-379615 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-379615 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:20:21
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:20:21.286080  805357 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:20:21.286213  805357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:21.286224  805357 out.go:374] Setting ErrFile to fd 2...
	I1006 14:20:21.286229  805357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:21.286493  805357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	W1006 14:20:21.286631  805357 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21701-803497/.minikube/config/config.json: open /home/jenkins/minikube-integration/21701-803497/.minikube/config/config.json: no such file or directory
	I1006 14:20:21.287029  805357 out.go:368] Setting JSON to true
	I1006 14:20:21.287928  805357 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":75773,"bootTime":1759684648,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:20:21.287993  805357 start.go:140] virtualization:  
	I1006 14:20:21.292175  805357 out.go:99] [download-only-379615] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1006 14:20:21.292355  805357 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball: no such file or directory
	I1006 14:20:21.292411  805357 notify.go:220] Checking for updates...
	I1006 14:20:21.295352  805357 out.go:171] MINIKUBE_LOCATION=21701
	I1006 14:20:21.298407  805357 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:20:21.301408  805357 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:20:21.304292  805357 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:20:21.307187  805357 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1006 14:20:21.312818  805357 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 14:20:21.313143  805357 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:20:21.334648  805357 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:20:21.334773  805357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:21.396546  805357 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 14:20:21.387598322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:21.396658  805357 docker.go:318] overlay module found
	I1006 14:20:21.399602  805357 out.go:99] Using the docker driver based on user configuration
	I1006 14:20:21.399636  805357 start.go:304] selected driver: docker
	I1006 14:20:21.399647  805357 start.go:924] validating driver "docker" against <nil>
	I1006 14:20:21.399885  805357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:21.453748  805357 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-06 14:20:21.443988105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:21.453911  805357 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:20:21.454215  805357 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1006 14:20:21.454380  805357 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 14:20:21.457512  805357 out.go:171] Using Docker driver with root privileges
	I1006 14:20:21.460453  805357 cni.go:84] Creating CNI manager for ""
	I1006 14:20:21.460538  805357 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1006 14:20:21.460555  805357 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1006 14:20:21.460645  805357 start.go:348] cluster config:
	{Name:download-only-379615 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-379615 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:20:21.463643  805357 out.go:99] Starting "download-only-379615" primary control-plane node in "download-only-379615" cluster
	I1006 14:20:21.463694  805357 cache.go:123] Beginning downloading kic base image for docker with docker
	I1006 14:20:21.466713  805357 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1006 14:20:21.466747  805357 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1006 14:20:21.466812  805357 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1006 14:20:21.482414  805357 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 14:20:21.482592  805357 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1006 14:20:21.482683  805357 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1006 14:20:21.523720  805357 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1006 14:20:21.523756  805357 cache.go:58] Caching tarball of preloaded images
	I1006 14:20:21.524567  805357 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1006 14:20:21.527872  805357 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1006 14:20:21.527894  805357 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1006 14:20:21.623314  805357 preload.go:290] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1006 14:20:21.623441  805357 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1006 14:20:24.506353  805357 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1006 14:20:24.506836  805357 profile.go:143] Saving config to /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/download-only-379615/config.json ...
	I1006 14:20:24.506909  805357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/download-only-379615/config.json: {Name:mkbbce21d704714d5b5770d9609c3103c42b6a99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1006 14:20:24.507109  805357 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1006 14:20:24.507365  805357 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21701-803497/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-379615 host does not exist
	  To start a cluster, run: "minikube start -p download-only-379615"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
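
The "Last Start" log above shows the preload being fetched with an MD5 checksum obtained from the GCS API and appended to the download URL. A rough sketch of a download-and-verify step under the same contract; the helper name and the io.MultiWriter streaming are our choices, not minikube's implementation:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing, then compares the
// payload's MD5 against want (hex-encoded), matching the checksum-tagged
// URL in the log above.
func downloadWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and checksum taken verbatim from the log above; note the tarball
	// is large.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4"
	if err := downloadWithMD5(url, "preload.tar.lz4", "002a73d62a3b066a08573cf3da2c8cb4"); err != nil {
		panic(err)
	}
}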

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-379615
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-023239 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-023239 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.869412557s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.87s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1006 14:20:31.382748  805351 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1006 14:20:31.382791  805351 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21701-803497/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-023239
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-023239: exit status 85 (165.858172ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-379615 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-379615 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ delete  │ -p download-only-379615                                                                                                                                                       │ download-only-379615 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │ 06 Oct 25 14:20 UTC │
	│ start   │ -o=json --download-only -p download-only-023239 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-023239 │ jenkins │ v1.37.0 │ 06 Oct 25 14:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/06 14:20:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1006 14:20:27.563018  805556 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:20:27.563208  805556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:27.563228  805556 out.go:374] Setting ErrFile to fd 2...
	I1006 14:20:27.563248  805556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:20:27.563634  805556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:20:27.564227  805556 out.go:368] Setting JSON to true
	I1006 14:20:27.565468  805556 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":75779,"bootTime":1759684648,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:20:27.565615  805556 start.go:140] virtualization:  
	I1006 14:20:27.569103  805556 out.go:99] [download-only-023239] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:20:27.569521  805556 notify.go:220] Checking for updates...
	I1006 14:20:27.572462  805556 out.go:171] MINIKUBE_LOCATION=21701
	I1006 14:20:27.575634  805556 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:20:27.578677  805556 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:20:27.581698  805556 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:20:27.584855  805556 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1006 14:20:27.590770  805556 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1006 14:20:27.591060  805556 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:20:27.625358  805556 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:20:27.625484  805556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:27.685650  805556 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-06 14:20:27.676546984 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:27.685762  805556 docker.go:318] overlay module found
	I1006 14:20:27.688958  805556 out.go:99] Using the docker driver based on user configuration
	I1006 14:20:27.689008  805556 start.go:304] selected driver: docker
	I1006 14:20:27.689016  805556 start.go:924] validating driver "docker" against <nil>
	I1006 14:20:27.689142  805556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:20:27.741849  805556 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-06 14:20:27.732789021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:20:27.742007  805556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1006 14:20:27.742283  805556 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1006 14:20:27.742439  805556 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1006 14:20:27.745495  805556 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-023239 host does not exist
	  To start a cluster, run: "minikube start -p download-only-023239"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-023239
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1006 14:20:33.193315  805351 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-859483 --alsologtostderr --binary-mirror http://127.0.0.1:42473 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-859483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-859483
--- PASS: TestBinaryMirror (0.62s)
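
The test above points --binary-mirror at a local HTTP endpoint. Any static file server works as the mirror, as long as it exposes the same release paths as dl.k8s.io; a minimal sketch, where the ./mirror directory layout is an assumption for illustration:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror on the loopback address the test passed to
	// --binary-mirror. The directory must mirror the dl.k8s.io release
	// layout, e.g. ./mirror/release/v1.34.1/bin/linux/arm64/kubectl.
	log.Println("serving ./mirror on 127.0.0.1:42473")
	log.Fatal(http.ListenAndServe("127.0.0.1:42473", http.FileServer(http.Dir("./mirror"))))
}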

                                                
                                    
TestOffline (85.54s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-288066 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-288066 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m23.247548053s)
helpers_test.go:175: Cleaning up "offline-docker-288066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-288066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-288066: (2.287258369s)
--- PASS: TestOffline (85.54s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006450
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-006450: exit status 85 (76.157191ms)

                                                
                                                
-- stdout --
	* Profile "addons-006450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006450"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006450
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-006450: exit status 85 (75.225911ms)

                                                
                                                
-- stdout --
	* Profile "addons-006450" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-006450"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (174s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-006450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-006450 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m53.998232871s)
--- PASS: TestAddons/Setup (174.00s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-006450 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-006450 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.91s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-006450 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-006450 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4944c9e9-6ad0-47b4-8870-2e20bea63255] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4944c9e9-6ad0-47b4-8870-2e20bea63255] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003577823s
addons_test.go:694: (dbg) Run:  kubectl --context addons-006450 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-006450 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-006450 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-006450 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.91s)
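
What the three kubectl exec probes above verify from inside the pod can be expressed as a small workload-side check. A sketch assuming only the two environment variables and the credentials mount that the gcp-auth addon injects:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Both variables are expected to be set by the gcp-auth webhook.
	creds := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
	project := os.Getenv("GOOGLE_CLOUD_PROJECT")
	if creds == "" || project == "" {
		fmt.Println("gcp-auth injection missing")
		os.Exit(1)
	}
	data, err := os.ReadFile(creds)
	if err != nil {
		fmt.Println("credentials file not mounted:", err)
		os.Exit(1)
	}
	fmt.Printf("project %s, credentials %d bytes\n", project, len(data))
}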

                                                
                                    
TestAddons/parallel/Registry (17.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.208162ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-btgr2" [90bfa3d6-9f89-4227-b3ef-d98d9fadd197] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004237432s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wd7b6" [638a84e3-6fae-4413-aa77-31014a85ff29] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004754105s
addons_test.go:392: (dbg) Run:  kubectl --context addons-006450 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-006450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-006450 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.474417317s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 ip
2025/10/06 14:32:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.53s)
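
The wget --spider probe and the final GET against 192.168.49.2:5000 above are plain reachability checks. An equivalent sketch in Go; hitting /v2/ is our choice here (a distribution registry answers it, typically with 200 or 401), not something the test itself does:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Address taken from the debug line above; /v2/ is the registry API base.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000/v2/")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}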

                                                
                                    
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.505057ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-006450
addons_test.go:332: (dbg) Run:  kubectl --context addons-006450 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mwfpm" [29fea0d4-524b-4491-9729-13340bdc8098] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003629429s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.305036ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-s77t8" [4560d030-72e2-4fed-b2fb-5a3edfe4178c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006864802s
addons_test.go:463: (dbg) Run:  kubectl --context addons-006450 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

                                                
                                    
TestAddons/parallel/Headlamp (18.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-006450 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-pvcw6" [17e30643-0d51-4f2a-a8f8-ea689b0ce9fa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-pvcw6" [17e30643-0d51-4f2a-a8f8-ea689b0ce9fa] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-pvcw6" [17e30643-0d51-4f2a-a8f8-ea689b0ce9fa] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003803728s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable headlamp --alsologtostderr -v=1: (5.921730779s)
--- PASS: TestAddons/parallel/Headlamp (18.82s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-zjsh8" [2e748800-af4f-4933-abfb-819354e0821c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004249443s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.46s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-d29s2" [0c163bb6-be86-4968-b8c1-96839618f3ac] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003331973s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

                                                
                                    
TestAddons/parallel/Yakd (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-nfj9q" [7f649cd2-60fe-4bb2-abab-f044f74b94e9] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004041458s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-006450 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-006450 addons disable yakd --alsologtostderr -v=1: (5.772134106s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-006450
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-006450: (11.097693449s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-006450
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-006450
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-006450
--- PASS: TestAddons/StoppedEnableDisable (11.39s)

                                                
                                    
TestCertOptions (42.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-320375 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E1006 15:42:29.635831  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-320375 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (39.08044695s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-320375 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-320375 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-320375 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-320375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-320375
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-320375: (2.288498905s)
--- PASS: TestCertOptions (42.11s)
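
The openssl call above dumps the whole apiserver certificate; if you only care about the SANs that --apiserver-ips/--apiserver-names should have added, a short Go equivalent (assuming apiserver.crt has been copied off the node first, e.g. via minikube ssh):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Local copy of /var/lib/minikube/certs/apiserver.crt from the node.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}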

                                                
                                    
TestCertExpiration (274.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-976999 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-976999 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (46.93770372s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-976999 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-976999 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (45.406562804s)
helpers_test.go:175: Cleaning up "cert-expiration-976999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-976999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-976999: (2.283207569s)
--- PASS: TestCertExpiration (274.63s)
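
One way to watch the effect of --cert-expiration between the two starts above, without reading files off the node, is to dial the apiserver and inspect the certificate it serves; the address used here (the default KIC address with port 8443) is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify is fine here: we only inspect the cert, we do not
	// trust the connection for anything else.
	conn, err := tls.Dial("tcp", "192.168.49.2:8443", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("apiserver cert expires:", cert.NotAfter,
		"remaining:", time.Until(cert.NotAfter).Round(time.Second))
}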

                                                
                                    
TestDockerFlags (47.86s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-566618 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-566618 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.505283277s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-566618 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-566618 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-566618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-566618
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-566618: (2.392104613s)
--- PASS: TestDockerFlags (47.86s)
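
Both ssh probes reduce to substring checks over systemd properties: the --docker-env values must surface under Environment, the --docker-opt values under ExecStart. A hedged sketch of the Environment half from Go, assuming a minikube binary on PATH and the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the test runs: read dockerd's Environment from systemd.
		out, err := exec.Command("minikube", "-p", "docker-flags-566618", "ssh",
			"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			panic(err)
		}
		env := string(out)
		// --docker-env=FOO=BAR and --docker-env=BAZ=BAT should show up here.
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
		}
	}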

                                                
                                    
TestForceSystemdFlag (51.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-193481 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-193481 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (48.261169476s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-193481 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-193481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-193481
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-193481: (2.451133466s)
--- PASS: TestForceSystemdFlag (51.64s)
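
The check behind docker_test.go:110 is simply that dockerd reports the systemd cgroup driver once --force-systemd is passed. A sketch of the same probe, assuming minikube on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// --force-systemd should make dockerd report the systemd cgroup driver.
		out, err := exec.Command("minikube", "-p", "force-systemd-flag-193481", "ssh",
			"docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			panic(err)
		}
		driver := strings.TrimSpace(string(out))
		fmt.Println("cgroup driver:", driver)
		if driver != "systemd" {
			fmt.Println("expected systemd, got", driver)
		}
	}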

                                                
                                    
TestForceSystemdEnv (52.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-170241 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-170241 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (49.076942975s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-170241 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-170241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-170241
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-170241: (2.535168149s)
--- PASS: TestForceSystemdEnv (52.53s)
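
This variant presumably flips the same switch through the MINIKUBE_FORCE_SYSTEMD environment variable (the variable name is visible in the start output later in this report) rather than through a flag, then reuses the docker_test.go:110 probe. A hedged sketch of launching start that way:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Presumed environment-variable equivalent of --force-systemd.
		cmd := exec.Command("minikube", "start", "-p", "force-systemd-env-170241",
			"--memory=3072", "--driver=docker", "--container-runtime=docker")
		cmd.Env = append(os.Environ(), "MINIKUBE_FORCE_SYSTEMD=true")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}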

                                                
                                    
TestErrorSpam/setup (32.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-366424 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-366424 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-366424 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-366424 --driver=docker  --container-runtime=docker: (32.158506896s)
--- PASS: TestErrorSpam/setup (32.16s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

TestErrorSpam/stop (2.88s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 stop: (2.664830766s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-366424 --log_dir /tmp/nospam-366424 stop
--- PASS: TestErrorSpam/stop (2.88s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21701-803497/.minikube/files/etc/test/nested/copy/805351/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.21s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-933184 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-933184 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m19.204060702s)
--- PASS: TestFunctional/serial/StartWithProxy (79.21s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.5s)

=== RUN   TestFunctional/serial/SoftStart
I1006 14:47:21.721441  805351 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-933184 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-933184 --alsologtostderr -v=8: (50.499716182s)
functional_test.go:678: soft start took 50.501406603s for "functional-933184" cluster.
I1006 14:48:12.221632  805351 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (50.50s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-933184 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 cache add registry.k8s.io/pause:3.3: (1.085149394s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-933184 /tmp/TestFunctionalserialCacheCmdcacheadd_local1858301594/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cache add minikube-local-cache-test:functional-933184
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cache delete minikube-local-cache-test:functional-933184
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-933184
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.664724ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
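
The reload cycle above is: delete the image inside the node, confirm crictl no longer finds it (hence the expected exit status 1), run cache reload, and confirm the image is back. A compact sketch of that loop, assuming minikube on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		p := "functional-933184"
		// 1. Remove the image from the node's runtime.
		run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
		// 2. inspecti should now fail, mirroring the exit status 1 above.
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("image unexpectedly still present")
		}
		// 3. Restore from minikube's local cache, then re-check.
		run("-p", p, "cache", "reload")
		if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}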

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 kubectl -- --context functional-933184 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-933184 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (58.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-933184 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1006 14:48:27.935079  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:27.941690  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:27.953408  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:27.974856  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:28.016476  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:28.098085  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:28.259641  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:28.581237  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:29.223266  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:30.504846  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:33.067697  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:38.189225  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:48:48.431461  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:49:08.912828  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-933184 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.270078783s)
functional_test.go:776: restart took 58.270184766s for "functional-933184" cluster.
I1006 14:49:17.103540  805351 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (58.27s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-933184 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 logs: (1.211118033s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.22s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 logs --file /tmp/TestFunctionalserialLogsFileCmd3266047087/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 logs --file /tmp/TestFunctionalserialLogsFileCmd3266047087/001/logs.txt: (1.223584859s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

TestFunctional/serial/InvalidService (5.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-933184 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-933184
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-933184: exit status 115 (736.765592ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31297 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-933184 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-933184 delete -f testdata/invalidsvc.yaml: (1.011109753s)
--- PASS: TestFunctional/serial/InvalidService (5.01s)
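
The notable detail is exit status 115: minikube signals SVC_UNREACHABLE through a distinct exit code rather than crashing, and the harness recovers that code from the process error. A minimal Go sketch of the pattern via exec.ExitError:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-933184")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The invalid service has no running pods, so minikube is expected
			// to bail out with exit status 115 (SVC_UNREACHABLE).
			fmt.Println("exit code:", ee.ExitCode())
		} else if err != nil {
			panic(err)
		}
	}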

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 config get cpus: exit status 14 (120.437189ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 config get cpus: exit status 14 (98.958428ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-933184 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-933184 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (184.243341ms)

-- stdout --
	* [functional-933184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1006 14:59:44.671078  864470 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:59:44.671374  864470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:59:44.671401  864470 out.go:374] Setting ErrFile to fd 2...
	I1006 14:59:44.671419  864470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:59:44.671793  864470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:59:44.672237  864470 out.go:368] Setting JSON to false
	I1006 14:59:44.673336  864470 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":78137,"bootTime":1759684648,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:59:44.673437  864470 start.go:140] virtualization:  
	I1006 14:59:44.676692  864470 out.go:179] * [functional-933184] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1006 14:59:44.679860  864470 notify.go:220] Checking for updates...
	I1006 14:59:44.679828  864470 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:59:44.683484  864470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:59:44.686458  864470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:59:44.689850  864470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:59:44.692845  864470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:59:44.695647  864470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:59:44.698936  864470 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:59:44.699516  864470 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:59:44.720077  864470 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:59:44.720218  864470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:59:44.777784  864470 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 14:59:44.768479696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:59:44.777898  864470 docker.go:318] overlay module found
	I1006 14:59:44.780975  864470 out.go:179] * Using the docker driver based on existing profile
	I1006 14:59:44.783843  864470 start.go:304] selected driver: docker
	I1006 14:59:44.783861  864470 start.go:924] validating driver "docker" against &{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:59:44.783967  864470 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:59:44.787433  864470 out.go:203] 
	W1006 14:59:44.790298  864470 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1006 14:59:44.793259  864470 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-933184 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-933184 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-933184 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (201.542736ms)

-- stdout --
	* [functional-933184] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1006 14:59:44.468629  864423 out.go:360] Setting OutFile to fd 1 ...
	I1006 14:59:44.468776  864423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:59:44.468787  864423 out.go:374] Setting ErrFile to fd 2...
	I1006 14:59:44.468792  864423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 14:59:44.470540  864423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 14:59:44.471022  864423 out.go:368] Setting JSON to false
	I1006 14:59:44.472099  864423 start.go:130] hostinfo: {"hostname":"ip-172-31-29-130","uptime":78136,"bootTime":1759684648,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1006 14:59:44.472192  864423 start.go:140] virtualization:  
	I1006 14:59:44.476013  864423 out.go:179] * [functional-933184] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1006 14:59:44.479102  864423 out.go:179]   - MINIKUBE_LOCATION=21701
	I1006 14:59:44.479149  864423 notify.go:220] Checking for updates...
	I1006 14:59:44.485005  864423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1006 14:59:44.488089  864423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	I1006 14:59:44.491518  864423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	I1006 14:59:44.494543  864423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1006 14:59:44.497601  864423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1006 14:59:44.500849  864423 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 14:59:44.501421  864423 driver.go:421] Setting default libvirt URI to qemu:///system
	I1006 14:59:44.529217  864423 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1006 14:59:44.529343  864423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 14:59:44.592880  864423 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-06 14:59:44.580018642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 14:59:44.592988  864423 docker.go:318] overlay module found
	I1006 14:59:44.596071  864423 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1006 14:59:44.599035  864423 start.go:304] selected driver: docker
	I1006 14:59:44.599054  864423 start.go:924] validating driver "docker" against &{Name:functional-933184 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-933184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1006 14:59:44.599275  864423 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1006 14:59:44.603546  864423 out.go:203] 
	W1006 14:59:44.606406  864423 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1006 14:59:44.609161  864423 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
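
status -f takes a Go template over minikube's status struct, so scripted health checks reduce to string splitting. A sketch using the same template string as the test above (including its literal "kublet" label), assuming minikube on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Fields come from minikube's status struct: Host, Kubelet, APIServer, Kubeconfig.
		format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
		out, err := exec.Command("minikube", "-p", "functional-933184", "status", "-f", format).Output()
		if err != nil {
			fmt.Println("status returned:", err) // status exits non-zero when components are down
		}
		for _, kv := range strings.Split(strings.TrimSpace(string(out)), ",") {
			fmt.Println(kv)
		}
	}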

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh -n functional-933184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cp functional-933184:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd11068724/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh -n functional-933184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh -n functional-933184 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)
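
The three cp invocations above form a push/pull round-trip, with the sudo cat calls verifying content on the node side. A sketch of the same round-trip ending in a byte-for-byte comparison, assuming minikube on PATH (the local output path /tmp/cp-test.out is hypothetical):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		p := "functional-933184"
		// Push a local file into the node, then pull it back out.
		exec.Command("minikube", "-p", p, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
		exec.Command("minikube", "-p", p, "cp", p+":/home/docker/cp-test.txt", "/tmp/cp-test.out").Run()
		a, _ := os.ReadFile("testdata/cp-test.txt")
		b, _ := os.ReadFile("/tmp/cp-test.out")
		fmt.Println("round-trip intact:", bytes.Equal(a, b))
	}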

                                                
                                    
TestFunctional/parallel/FileSync (0.92s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/805351/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /etc/test/nested/copy/805351/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.92s)

TestFunctional/parallel/CertSync (2.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/805351.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /etc/ssl/certs/805351.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/805351.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /usr/share/ca-certificates/805351.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8053512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /etc/ssl/certs/8053512.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8053512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /usr/share/ca-certificates/8053512.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.60s)
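
The .0 filenames checked here (51391683.0, 3ec20f2e.0) follow the openssl c_rehash convention: the stem is the certificate's subject hash, so the same cert is reachable both by its original name and by its hash link under /etc/ssl/certs. A sketch that reproduces the mapping, assuming openssl on PATH and a local PEM copy of the synced cert (the filename is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -hash` prints the subject hash that the
		// /etc/ssl/certs/<hash>.0 link name is derived from.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", "805351.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("expect this cert to be synced as /etc/ssl/certs/%s.0\n", hash)
	}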

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-933184 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.3s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 ssh "sudo systemctl is-active crio": exit status 1 (295.671995ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
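
Note that the non-zero exit above is the expected result: systemctl is-active prints the unit state and exits 0 only when the unit is active, so with the docker runtime selected, crio reporting "inactive" with exit status 3 is exactly what the test wants. A quick sketch of the pair of checks:

# Sketch: only the configured runtime should be active inside the node.
out/minikube-linux-arm64 -p functional-933184 ssh "sudo systemctl is-active docker"  # expect: active, exit 0
out/minikube-linux-arm64 -p functional-933184 ssh "sudo systemctl is-active crio"    # expect: inactive, exit 3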

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-933184 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-933184 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-933184 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 859251: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-933184 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-933184 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-933184 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [bb1e9934-d51c-4d28-a822-7039883994d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [bb1e9934-d51c-4d28-a822-7039883994d1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003456052s
I1006 14:49:36.036562  805351 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-933184 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.166.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
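
With the tunnel process from StartTunnel still running, the LoadBalancer service receives a routable ingress IP (10.97.166.75 here) that can be exercised directly from the host. A minimal sketch of that access check:

# Sketch: resolve the tunnel-assigned ingress IP and curl it directly.
IP=$(kubectl --context functional-933184 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -fsS "http://$IP" > /dev/null && echo "tunnel at http://$IP is working!"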

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-933184 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (364.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-933184 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-933184 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-v749q" [57f45961-3c12-411b-8d51-7296ab506a54] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1006 14:53:55.639057  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 14:58:27.932311  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-75c85bcc94-v749q" [57f45961-3c12-411b-8d51-7296ab506a54] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6m4.00875536s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (364.24s)
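
The bulk of this test's 364s is the 6m4s wait for the hello-node pod to become Ready; the deployment itself reduces to two kubectl commands. A sketch, with a rollout wait standing in for the test's pod matcher (the 10m timeout is an assumption mirroring the test's own limit):

# Sketch: deploy and expose the echo server as the test does, then wait.
kubectl --context functional-933184 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-933184 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-933184 rollout status deployment/hello-node --timeout=10m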

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "368.055556ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "80.593372ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "449.570764ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.432087ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
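
The timing gap between the two JSON listings is expected: the --light variant skips validating each cluster's live status, which is why it returns in ~58ms instead of ~450ms. The variants exercised above:

# Sketch: profile listing in each output mode timed by these tests.
out/minikube-linux-arm64 profile list
out/minikube-linux-arm64 profile list -o json
out/minikube-linux-arm64 profile list -o json --light   # skips cluster status checks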

TestFunctional/parallel/MountCmd/any-port (9.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdany-port2009931317/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759762781118553875" to /tmp/TestFunctionalparallelMountCmdany-port2009931317/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759762781118553875" to /tmp/TestFunctionalparallelMountCmdany-port2009931317/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759762781118553875" to /tmp/TestFunctionalparallelMountCmdany-port2009931317/001/test-1759762781118553875
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (512.302893ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1006 14:59:41.632271  805351 retry.go:31] will retry after 327.830566ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  6 14:59 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  6 14:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  6 14:59 test-1759762781118553875
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh cat /mount-9p/test-1759762781118553875
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-933184 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [78d80332-eac8-4b03-bb91-5517ae7bb3b5] Pending
helpers_test.go:352: "busybox-mount" [78d80332-eac8-4b03-bb91-5517ae7bb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [78d80332-eac8-4b03-bb91-5517ae7bb3b5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [78d80332-eac8-4b03-bb91-5517ae7bb3b5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003685044s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-933184 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdany-port2009931317/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.30s)
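
The initial findmnt failure followed by retry.go is normal here: the mount daemon starts asynchronously, so the first probe can race it. A condensed sketch of the round trip, with SRC and MOUNT_PID as scratch names and the fixed sleep standing in for the test's retry loop:

# Sketch: background 9p mount, guest-side verification, then cleanup.
SRC=$(mktemp -d)
out/minikube-linux-arm64 mount -p functional-933184 "$SRC":/mount-9p &
MOUNT_PID=$!
sleep 2   # give the mount daemon a moment, as the test's retry does
out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-933184 ssh "sudo umount -f /mount-9p"
kill "$MOUNT_PID"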

TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 service list -o json
functional_test.go:1504: Took "645.358819ms" to run "out/minikube-linux-arm64 -p functional-933184 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30832
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30832
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)
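
All three URL forms above resolve to the same NodePort (30832) on the node IP; only the output formatting differs. Sketch:

# Sketch: the URL-retrieval variants used by the ServiceCmd tests.
out/minikube-linux-arm64 -p functional-933184 service hello-node --url
out/minikube-linux-arm64 -p functional-933184 service hello-node --url --https
out/minikube-linux-arm64 -p functional-933184 service hello-node --url --format={{.IP}}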

TestFunctional/parallel/MountCmd/specific-port (1.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdspecific-port2407401783/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdspecific-port2407401783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 ssh "sudo umount -f /mount-9p": exit status 1 (276.060066ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-933184 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdspecific-port2407401783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.15s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732597596/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732597596/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732597596/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-933184 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732597596/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732597596/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-933184 /tmp/TestFunctionalparallelMountCmdVerifyCleanup732597596/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)
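
The three "unable to find parent, assuming dead" lines are the cleanup helper confirming that the --kill invocation already terminated all three mount daemons. Sketch of that cleanup path:

# Sketch: one --kill invocation tears down every mount daemon for the profile.
out/minikube-linux-arm64 mount -p functional-933184 --kill=true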

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 version -o=json --components: (1.045149151s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-933184 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-933184
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-933184
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-933184 image ls --format short --alsologtostderr:
I1006 15:00:04.560487  867465 out.go:360] Setting OutFile to fd 1 ...
I1006 15:00:04.560634  867465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:04.560645  867465 out.go:374] Setting ErrFile to fd 2...
I1006 15:00:04.560650  867465 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:04.561024  867465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
I1006 15:00:04.561957  867465 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:04.562105  867465 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:04.562795  867465 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 15:00:04.590886  867465 ssh_runner.go:195] Run: systemctl --version
I1006 15:00:04.591123  867465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 15:00:04.608680  867465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 15:00:04.706512  867465 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
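
The four ImageList variants (this one and the table/json/yaml tests below) render the same image inventory; only the encoder differs. Sketch:

# Sketch: the same image listing in each supported format.
for fmt in short table json yaml; do
  out/minikube-linux-arm64 -p functional-933184 image ls --format "$fmt"
done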

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-933184 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ 05baa95f5142d │ 74.7MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ localhost/my-image                          │ functional-933184 │ 65c596ace6bb8 │ 1.41MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ 7eb2c6ff0c5a7 │ 71.5MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ b5f57ec6b9867 │ 50.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ docker.io/library/minikube-local-cache-test │ functional-933184 │ c634c1d065e95 │ 30B    │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ docker.io/kicbase/echo-server               │ functional-933184 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ 43911e833d64d │ 83.7MB │
│ docker.io/library/nginx                     │ alpine            │ 35f3cbee4fb77 │ 52.9MB │
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-933184 image ls --format table --alsologtostderr:
I1006 15:00:08.893632  867819 out.go:360] Setting OutFile to fd 1 ...
I1006 15:00:08.893794  867819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:08.893828  867819 out.go:374] Setting ErrFile to fd 2...
I1006 15:00:08.893840  867819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:08.894162  867819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
I1006 15:00:08.894824  867819 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:08.894970  867819 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:08.895482  867819 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 15:00:08.914002  867819 ssh_runner.go:195] Run: systemctl --version
I1006 15:00:08.914073  867819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 15:00:08.933236  867819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 15:00:09.034712  867819 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-933184 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"65c596ace6bb80fb53aea888550800d96862d05550ee50369ab2e2ee8df05e03","repoDigests":[],"repoTags":["localhost/my-image:functional-933184"],"size":"1410000"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"83700000"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"71500000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"74700000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"c634c1d065e951ade1879cac2c2ea81f9e222f2a511be45173eb700d7584b74d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-933184"],"size":"30"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52900000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-933184","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"50500000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-933184 image ls --format json --alsologtostderr:
I1006 15:00:08.670621  867784 out.go:360] Setting OutFile to fd 1 ...
I1006 15:00:08.670840  867784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:08.670852  867784 out.go:374] Setting ErrFile to fd 2...
I1006 15:00:08.670858  867784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:08.671202  867784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
I1006 15:00:08.672100  867784 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:08.672287  867784 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:08.672818  867784 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 15:00:08.693129  867784 ssh_runner.go:195] Run: systemctl --version
I1006 15:00:08.693189  867784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 15:00:08.711875  867784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 15:00:08.810881  867784 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-933184 image ls --format yaml --alsologtostderr:
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "50500000"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "74700000"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52900000"
- id: c634c1d065e951ade1879cac2c2ea81f9e222f2a511be45173eb700d7584b74d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-933184
size: "30"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "71500000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-933184
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "83700000"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-933184 image ls --format yaml --alsologtostderr:
I1006 15:00:04.792415  867513 out.go:360] Setting OutFile to fd 1 ...
I1006 15:00:04.792623  867513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:04.792651  867513 out.go:374] Setting ErrFile to fd 2...
I1006 15:00:04.792672  867513 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:04.793093  867513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
I1006 15:00:04.794211  867513 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:04.794437  867513 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:04.795374  867513 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 15:00:04.814423  867513 ssh_runner.go:195] Run: systemctl --version
I1006 15:00:04.814486  867513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 15:00:04.832933  867513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 15:00:04.930569  867513 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-933184 ssh pgrep buildkitd: exit status 1 (304.771592ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image build -t localhost/my-image:functional-933184 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-933184 image build -t localhost/my-image:functional-933184 testdata/build --alsologtostderr: (3.125951244s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-933184 image build -t localhost/my-image:functional-933184 testdata/build --alsologtostderr:
I1006 15:00:05.316585  867612 out.go:360] Setting OutFile to fd 1 ...
I1006 15:00:05.318033  867612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:05.318084  867612 out.go:374] Setting ErrFile to fd 2...
I1006 15:00:05.318107  867612 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1006 15:00:05.318446  867612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
I1006 15:00:05.319166  867612 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:05.321342  867612 config.go:182] Loaded profile config "functional-933184": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1006 15:00:05.321929  867612 cli_runner.go:164] Run: docker container inspect functional-933184 --format={{.State.Status}}
I1006 15:00:05.340132  867612 ssh_runner.go:195] Run: systemctl --version
I1006 15:00:05.340190  867612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-933184
I1006 15:00:05.365484  867612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37516 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/functional-933184/id_rsa Username:docker}
I1006 15:00:05.462570  867612 build_images.go:161] Building image from path: /tmp/build.3228873122.tar
I1006 15:00:05.462639  867612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1006 15:00:05.471118  867612 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3228873122.tar
I1006 15:00:05.474886  867612 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3228873122.tar: stat -c "%s %y" /var/lib/minikube/build/build.3228873122.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3228873122.tar': No such file or directory
I1006 15:00:05.474918  867612 ssh_runner.go:362] scp /tmp/build.3228873122.tar --> /var/lib/minikube/build/build.3228873122.tar (3072 bytes)
I1006 15:00:05.494099  867612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3228873122
I1006 15:00:05.502330  867612 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3228873122 -xf /var/lib/minikube/build/build.3228873122.tar
I1006 15:00:05.511165  867612 docker.go:361] Building image: /var/lib/minikube/build/build.3228873122
I1006 15:00:05.511241  867612 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-933184 /var/lib/minikube/build/build.3228873122
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:65c596ace6bb80fb53aea888550800d96862d05550ee50369ab2e2ee8df05e03 done
#8 naming to localhost/my-image:functional-933184 done
#8 DONE 0.1s
I1006 15:00:08.367165  867612 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-933184 /var/lib/minikube/build/build.3228873122: (2.855896759s)
I1006 15:00:08.367239  867612 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3228873122
I1006 15:00:08.375569  867612 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3228873122.tar
I1006 15:00:08.383964  867612 build_images.go:217] Built localhost/my-image:functional-933184 from /tmp/build.3228873122.tar
I1006 15:00:08.383994  867612 build_images.go:133] succeeded building to: functional-933184
I1006 15:00:08.384002  867612 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.65s)
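
The build path shown in the trace: the testdata/build context is tarred, copied to /var/lib/minikube/build inside the node, unpacked, and built with the node's own docker daemon, so the result is immediately visible to the cluster without a push. Condensed sketch:

# Sketch: build an image inside the node's runtime and confirm it landed.
out/minikube-linux-arm64 -p functional-933184 image build \
  -t localhost/my-image:functional-933184 testdata/build
out/minikube-linux-arm64 -p functional-933184 image ls | grep my-image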

TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-933184
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image load --daemon kicbase/echo-server:functional-933184 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image load --daemon kicbase/echo-server:functional-933184 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-933184
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image load --daemon kicbase/echo-server:functional-933184 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image save kicbase/echo-server:functional-933184 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image rm kicbase/echo-server:functional-933184 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-933184
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 image save --daemon kicbase/echo-server:functional-933184 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-933184
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
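
Taken together, the last four image tests form a save/remove/load round trip. Condensed sketch, with TAR standing in for the workspace path used above:

# Sketch: export an image from the node, delete it, and re-import it.
TAR=/tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-933184 image save kicbase/echo-server:functional-933184 "$TAR"
out/minikube-linux-arm64 -p functional-933184 image rm kicbase/echo-server:functional-933184
out/minikube-linux-arm64 -p functional-933184 image load "$TAR"
out/minikube-linux-arm64 -p functional-933184 image ls | grep echo-server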

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 update-context --alsologtostderr -v=2
E1006 15:03:27.931512  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-933184 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/DockerEnv/bash (1.06s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-933184 docker-env) && out/minikube-linux-arm64 status -p functional-933184"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-933184 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.06s)
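
docker-env works by emitting DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH exports for the node's daemon, so evaluating it redirects the host docker CLI into the cluster. Sketch:

# Sketch: point the host docker client at the minikube node's daemon.
eval "$(out/minikube-linux-arm64 -p functional-933184 docker-env)"
docker images   # now lists the images inside the node, per the test above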

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-933184
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-933184
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-933184
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (168.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1006 15:04:51.000963  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m47.157703227s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (168.13s)
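These flags are the whole recipe for the HA topology exercised by the rest of this suite: --ha requests a multi-control-plane cluster and --wait true blocks until core components are healthy. Standalone, the same bring-up is (a sketch reusing the exact arguments above):

	out/minikube-linux-arm64 start -p ha-477199 --ha --memory 3072 --wait true \
	  --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 -p ha-477199 status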

TestMultiControlPlane/serial/DeployApp (9.1s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 kubectl -- rollout status deployment/busybox: (5.702735382s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-58cqf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-wwdmx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-xz7vk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-58cqf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-wwdmx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-xz7vk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-58cqf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-wwdmx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-xz7vk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.10s)
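The deployment check is the same three steps for every replica: apply the manifest, wait for the rollout, then resolve an external name and the in-cluster service names from inside each busybox pod. Condensed into a loop (a sketch; pod names vary per run):

	kubectl --context ha-477199 apply -f testdata/ha/ha-pod-dns-test.yaml
	kubectl --context ha-477199 rollout status deployment/busybox
	for pod in $(kubectl --context ha-477199 get pods -o jsonpath='{.items[*].metadata.name}'); do
	  kubectl --context ha-477199 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done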

TestMultiControlPlane/serial/PingHostFromPods (1.77s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-58cqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-58cqf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-wwdmx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-wwdmx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-xz7vk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 kubectl -- exec busybox-7b57f96db7-xz7vk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.77s)
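Each pod first resolves host.minikube.internal (the awk 'NR==5' | cut pipeline merely extracts the address field from nslookup's fifth output line) and then pings the docker network gateway directly. One iteration spelled out (a sketch; the pod name is from this run):

	kubectl --context ha-477199 exec busybox-7b57f96db7-58cqf -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-477199 exec busybox-7b57f96db7-58cqf -- sh -c "ping -c 1 192.168.49.1"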

TestMultiControlPlane/serial/AddWorkerNode (37.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 node add --alsologtostderr -v 5: (36.152217177s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5: (1.138701041s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (37.29s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-477199 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.086574615s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

TestMultiControlPlane/serial/CopyFile (20.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --output json --alsologtostderr -v 5
E1006 15:08:27.931901  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 status --output json --alsologtostderr -v 5: (1.104697893s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp testdata/cp-test.txt ha-477199:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1737995947/001/cp-test_ha-477199.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199:/home/docker/cp-test.txt ha-477199-m02:/home/docker/cp-test_ha-477199_ha-477199-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test_ha-477199_ha-477199-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199:/home/docker/cp-test.txt ha-477199-m03:/home/docker/cp-test_ha-477199_ha-477199-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test_ha-477199_ha-477199-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199:/home/docker/cp-test.txt ha-477199-m04:/home/docker/cp-test_ha-477199_ha-477199-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test_ha-477199_ha-477199-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp testdata/cp-test.txt ha-477199-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1737995947/001/cp-test_ha-477199-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m02:/home/docker/cp-test.txt ha-477199:/home/docker/cp-test_ha-477199-m02_ha-477199.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test_ha-477199-m02_ha-477199.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m02:/home/docker/cp-test.txt ha-477199-m03:/home/docker/cp-test_ha-477199-m02_ha-477199-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test_ha-477199-m02_ha-477199-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m02:/home/docker/cp-test.txt ha-477199-m04:/home/docker/cp-test_ha-477199-m02_ha-477199-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test_ha-477199-m02_ha-477199-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp testdata/cp-test.txt ha-477199-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1737995947/001/cp-test_ha-477199-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m03:/home/docker/cp-test.txt ha-477199:/home/docker/cp-test_ha-477199-m03_ha-477199.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test_ha-477199-m03_ha-477199.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m03:/home/docker/cp-test.txt ha-477199-m02:/home/docker/cp-test_ha-477199-m03_ha-477199-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test_ha-477199-m03_ha-477199-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m03:/home/docker/cp-test.txt ha-477199-m04:/home/docker/cp-test_ha-477199-m03_ha-477199-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test_ha-477199-m03_ha-477199-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp testdata/cp-test.txt ha-477199-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1737995947/001/cp-test_ha-477199-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m04:/home/docker/cp-test.txt ha-477199:/home/docker/cp-test_ha-477199-m04_ha-477199.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test_ha-477199-m04_ha-477199.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m04:/home/docker/cp-test.txt ha-477199-m02:/home/docker/cp-test_ha-477199-m04_ha-477199-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m02 "sudo cat /home/docker/cp-test_ha-477199-m04_ha-477199-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 cp ha-477199-m04:/home/docker/cp-test.txt ha-477199-m03:/home/docker/cp-test_ha-477199-m04_ha-477199-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199-m03 "sudo cat /home/docker/cp-test_ha-477199-m04_ha-477199-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.76s)
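The long sequence above is a single pattern repeated over every source/destination pair: minikube cp copies host-to-node, node-to-host, or node-to-node, and minikube ssh ... "sudo cat" verifies the bytes landed. One round trip as a sketch:

	# host -> node, then read it back
	out/minikube-linux-arm64 -p ha-477199 cp testdata/cp-test.txt ha-477199:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-477199 ssh -n ha-477199 "sudo cat /home/docker/cp-test.txt"
	# node -> node
	out/minikube-linux-arm64 -p ha-477199 cp ha-477199:/home/docker/cp-test.txt \
	  ha-477199-m02:/home/docker/cp-test_ha-477199_ha-477199-m02.txt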

TestMultiControlPlane/serial/StopSecondaryNode (11.91s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 node stop m02 --alsologtostderr -v 5: (11.055286821s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5: exit status 7 (854.161693ms)

-- stdout --
	ha-477199
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-477199-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-477199-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-477199-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1006 15:08:59.246265  890972 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:08:59.246529  890972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:08:59.246561  890972 out.go:374] Setting ErrFile to fd 2...
	I1006 15:08:59.246580  890972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:08:59.246992  890972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 15:08:59.247302  890972 out.go:368] Setting JSON to false
	I1006 15:08:59.247374  890972 mustload.go:65] Loading cluster: ha-477199
	I1006 15:08:59.247455  890972 notify.go:220] Checking for updates...
	I1006 15:08:59.248773  890972 config.go:182] Loaded profile config "ha-477199": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 15:08:59.248824  890972 status.go:174] checking status of ha-477199 ...
	I1006 15:08:59.249438  890972 cli_runner.go:164] Run: docker container inspect ha-477199 --format={{.State.Status}}
	I1006 15:08:59.272834  890972 status.go:371] ha-477199 host status = "Running" (err=<nil>)
	I1006 15:08:59.272857  890972 host.go:66] Checking if "ha-477199" exists ...
	I1006 15:08:59.273172  890972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-477199
	I1006 15:08:59.306840  890972 host.go:66] Checking if "ha-477199" exists ...
	I1006 15:08:59.307322  890972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:08:59.307381  890972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-477199
	I1006 15:08:59.328180  890972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37521 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/ha-477199/id_rsa Username:docker}
	I1006 15:08:59.438035  890972 ssh_runner.go:195] Run: systemctl --version
	I1006 15:08:59.444995  890972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:08:59.469068  890972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:08:59.541387  890972 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-06 15:08:59.530616058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 15:08:59.541975  890972 kubeconfig.go:125] found "ha-477199" server: "https://192.168.49.254:8443"
	I1006 15:08:59.542019  890972 api_server.go:166] Checking apiserver status ...
	I1006 15:08:59.542066  890972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:08:59.557447  890972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2275/cgroup
	I1006 15:08:59.566522  890972 api_server.go:182] apiserver freezer: "6:freezer:/docker/8e95477477145700b2306dd8e9cd2a92fddaec5416a0b386cb044c60a135625e/kubepods/burstable/podad9e08074f6fc242952260a61c3027cd/71f4ccedfbf21bc7d7815333920c77322565a1830cf6817fec858f43f49dc723"
	I1006 15:08:59.566658  890972 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8e95477477145700b2306dd8e9cd2a92fddaec5416a0b386cb044c60a135625e/kubepods/burstable/podad9e08074f6fc242952260a61c3027cd/71f4ccedfbf21bc7d7815333920c77322565a1830cf6817fec858f43f49dc723/freezer.state
	I1006 15:08:59.581336  890972 api_server.go:204] freezer state: "THAWED"
	I1006 15:08:59.581368  890972 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1006 15:08:59.591853  890972 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1006 15:08:59.591899  890972 status.go:463] ha-477199 apiserver status = Running (err=<nil>)
	I1006 15:08:59.591910  890972 status.go:176] ha-477199 status: &{Name:ha-477199 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:08:59.591927  890972 status.go:174] checking status of ha-477199-m02 ...
	I1006 15:08:59.592240  890972 cli_runner.go:164] Run: docker container inspect ha-477199-m02 --format={{.State.Status}}
	I1006 15:08:59.609486  890972 status.go:371] ha-477199-m02 host status = "Stopped" (err=<nil>)
	I1006 15:08:59.609512  890972 status.go:384] host is not running, skipping remaining checks
	I1006 15:08:59.609519  890972 status.go:176] ha-477199-m02 status: &{Name:ha-477199-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:08:59.609541  890972 status.go:174] checking status of ha-477199-m03 ...
	I1006 15:08:59.609862  890972 cli_runner.go:164] Run: docker container inspect ha-477199-m03 --format={{.State.Status}}
	I1006 15:08:59.627413  890972 status.go:371] ha-477199-m03 host status = "Running" (err=<nil>)
	I1006 15:08:59.627436  890972 host.go:66] Checking if "ha-477199-m03" exists ...
	I1006 15:08:59.627895  890972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-477199-m03
	I1006 15:08:59.646074  890972 host.go:66] Checking if "ha-477199-m03" exists ...
	I1006 15:08:59.646466  890972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:08:59.646527  890972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-477199-m03
	I1006 15:08:59.664871  890972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37531 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/ha-477199-m03/id_rsa Username:docker}
	I1006 15:08:59.765563  890972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:08:59.780726  890972 kubeconfig.go:125] found "ha-477199" server: "https://192.168.49.254:8443"
	I1006 15:08:59.780759  890972 api_server.go:166] Checking apiserver status ...
	I1006 15:08:59.780801  890972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:08:59.795242  890972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2080/cgroup
	I1006 15:08:59.806680  890972 api_server.go:182] apiserver freezer: "6:freezer:/docker/b50ead2110930a0f412708c843f15ebdbdf08e781165e141ab073bc2ac5531cc/kubepods/burstable/pod2d5810ef4294a72edb2a93de8e7a5618/95615069dfa9671881a33e6d946e51e289da7ec4eefffff669e51eb47b47572d"
	I1006 15:08:59.806775  890972 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b50ead2110930a0f412708c843f15ebdbdf08e781165e141ab073bc2ac5531cc/kubepods/burstable/pod2d5810ef4294a72edb2a93de8e7a5618/95615069dfa9671881a33e6d946e51e289da7ec4eefffff669e51eb47b47572d/freezer.state
	I1006 15:08:59.815010  890972 api_server.go:204] freezer state: "THAWED"
	I1006 15:08:59.815039  890972 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1006 15:08:59.823596  890972 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1006 15:08:59.823629  890972 status.go:463] ha-477199-m03 apiserver status = Running (err=<nil>)
	I1006 15:08:59.823641  890972 status.go:176] ha-477199-m03 status: &{Name:ha-477199-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:08:59.823763  890972 status.go:174] checking status of ha-477199-m04 ...
	I1006 15:08:59.824097  890972 cli_runner.go:164] Run: docker container inspect ha-477199-m04 --format={{.State.Status}}
	I1006 15:08:59.842484  890972 status.go:371] ha-477199-m04 host status = "Running" (err=<nil>)
	I1006 15:08:59.842534  890972 host.go:66] Checking if "ha-477199-m04" exists ...
	I1006 15:08:59.842906  890972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-477199-m04
	I1006 15:08:59.862850  890972 host.go:66] Checking if "ha-477199-m04" exists ...
	I1006 15:08:59.863185  890972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:08:59.863231  890972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-477199-m04
	I1006 15:08:59.882675  890972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37536 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/ha-477199-m04/id_rsa Username:docker}
	I1006 15:08:59.989795  890972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:09:00.039528  890972 status.go:176] ha-477199-m04 status: &{Name:ha-477199-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.91s)
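Note the exit status 7 on the status call: once any node is stopped, minikube status reports the degradation through its exit code as well as its text output, so scripts wrapping it must tolerate a non-zero exit. A sketch of the same check (the echo is an added illustration, not part of the test):

	out/minikube-linux-arm64 -p ha-477199 node stop m02
	out/minikube-linux-arm64 -p ha-477199 status || echo "status exited $? (expected while m02 is down)"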

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090877155s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (47.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node start m02 --alsologtostderr -v 5
E1006 15:09:26.567876  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:26.574239  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:26.585649  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:26.607144  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:26.648608  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:26.730032  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:26.891622  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:27.213250  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:27.855011  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:29.136266  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:31.698424  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:36.819815  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:09:47.061680  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 node start m02 --alsologtostderr -v 5: (46.110187459s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5: (1.106237344s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (47.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.207841184s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 stop --alsologtostderr -v 5
E1006 15:10:07.543548  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 stop --alsologtostderr -v 5: (34.963047991s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 start --wait true --alsologtostderr -v 5
E1006 15:10:48.509150  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:12:10.430991  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:13:27.932555  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 start --wait true --alsologtostderr -v 5: (3m19.696319868s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.88s)
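The assertion here is simply that the node list before the stop/start cycle matches the list afterwards. Approximated by hand (a sketch; the diff is an added comparison standing in for the harness's check):

	out/minikube-linux-arm64 -p ha-477199 node list > /tmp/nodes-before
	out/minikube-linux-arm64 -p ha-477199 stop
	out/minikube-linux-arm64 -p ha-477199 start --wait true
	out/minikube-linux-arm64 -p ha-477199 node list > /tmp/nodes-after
	diff /tmp/nodes-before /tmp/nodes-after   # expected: no output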

TestMultiControlPlane/serial/DeleteSecondaryNode (11.92s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 node delete m03 --alsologtostderr -v 5: (10.912602439s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.92s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

TestMultiControlPlane/serial/StopCluster (32.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 stop --alsologtostderr -v 5
E1006 15:14:26.567160  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 stop --alsologtostderr -v 5: (32.627778985s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5: exit status 7 (120.752996ms)

-- stdout --
	ha-477199
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-477199-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-477199-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1006 15:14:29.909347  918971 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:14:29.909539  918971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:14:29.909572  918971 out.go:374] Setting ErrFile to fd 2...
	I1006 15:14:29.909593  918971 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:14:29.909889  918971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 15:14:29.910119  918971 out.go:368] Setting JSON to false
	I1006 15:14:29.910189  918971 mustload.go:65] Loading cluster: ha-477199
	I1006 15:14:29.910660  918971 config.go:182] Loaded profile config "ha-477199": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 15:14:29.910736  918971 status.go:174] checking status of ha-477199 ...
	I1006 15:14:29.910227  918971 notify.go:220] Checking for updates...
	I1006 15:14:29.911894  918971 cli_runner.go:164] Run: docker container inspect ha-477199 --format={{.State.Status}}
	I1006 15:14:29.931769  918971 status.go:371] ha-477199 host status = "Stopped" (err=<nil>)
	I1006 15:14:29.931794  918971 status.go:384] host is not running, skipping remaining checks
	I1006 15:14:29.931801  918971 status.go:176] ha-477199 status: &{Name:ha-477199 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:14:29.931835  918971 status.go:174] checking status of ha-477199-m02 ...
	I1006 15:14:29.932214  918971 cli_runner.go:164] Run: docker container inspect ha-477199-m02 --format={{.State.Status}}
	I1006 15:14:29.956591  918971 status.go:371] ha-477199-m02 host status = "Stopped" (err=<nil>)
	I1006 15:14:29.956613  918971 status.go:384] host is not running, skipping remaining checks
	I1006 15:14:29.956621  918971 status.go:176] ha-477199-m02 status: &{Name:ha-477199-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:14:29.956640  918971 status.go:174] checking status of ha-477199-m04 ...
	I1006 15:14:29.956980  918971 cli_runner.go:164] Run: docker container inspect ha-477199-m04 --format={{.State.Status}}
	I1006 15:14:29.978159  918971 status.go:371] ha-477199-m04 host status = "Stopped" (err=<nil>)
	I1006 15:14:29.978181  918971 status.go:384] host is not running, skipping remaining checks
	I1006 15:14:29.978188  918971 status.go:176] ha-477199-m04 status: &{Name:ha-477199-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.75s)

TestMultiControlPlane/serial/RestartCluster (119.14s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1006 15:14:54.272319  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m58.108903075s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (119.14s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (91.98s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 node add --control-plane --alsologtostderr -v 5: (1m30.880487012s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-477199 status --alsologtostderr -v 5: (1.101340084s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (91.98s)
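Growing the control plane back after the restart is a single command; the follow-up status call confirms the new member joined as a control plane rather than a worker. Sketch:

	out/minikube-linux-arm64 -p ha-477199 node add --control-plane
	out/minikube-linux-arm64 -p ha-477199 status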

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.129895987s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

TestImageBuild/serial/Setup (36.67s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-954664 --driver=docker  --container-runtime=docker
E1006 15:18:27.931503  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-954664 --driver=docker  --container-runtime=docker: (36.667740974s)
--- PASS: TestImageBuild/serial/Setup (36.67s)

TestImageBuild/serial/NormalBuild (1.67s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-954664
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-954664: (1.674221313s)
--- PASS: TestImageBuild/serial/NormalBuild (1.67s)

TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-954664
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

TestImageBuild/serial/BuildWithDockerIgnore (0.69s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-954664
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.69s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-954664
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.91s)
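Taken together, the four image-build subtests cover the main knobs of minikube image build: a plain build, --build-opt passthrough for build-arg and no-cache, .dockerignore handling, and -f for a non-default Dockerfile path. Summarized (a sketch; the contexts are the testdata directories named above):

	out/minikube-linux-arm64 -p image-954664 image build -t aaa:latest ./testdata/image-build/test-normal
	out/minikube-linux-arm64 -p image-954664 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
	out/minikube-linux-arm64 -p image-954664 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f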

TestJSONOutput/start/Command (78.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-633861 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1006 15:19:26.573001  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-633861 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m18.133450834s)
--- PASS: TestJSONOutput/start/Command (78.14s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-633861 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-633861 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-633861 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-633861 --output=json --user=testUser: (10.990833652s)
--- PASS: TestJSONOutput/stop/Command (10.99s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-884793 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-884793 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (102.7489ms)
-- stdout --
	{"specversion":"1.0","id":"1a7d5f19-3b9c-45fd-bff6-37514fdc26d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-884793] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7ece357-8d56-4070-a003-936f0c0dcbd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"3866e18b-863b-4f05-aea8-0bb1a0ff75ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c852a05-8c27-4104-9bda-849334c2af27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig"}}
	{"specversion":"1.0","id":"e79ed6f7-4a87-436a-975b-810b642e71a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube"}}
	{"specversion":"1.0","id":"47913cdb-2eee-417f-89ea-439485063073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"918c01d7-36a5-479e-ad98-378c0c19b05f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2842f93f-c157-4c4e-88fc-77e6dae45158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-884793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-884793
--- PASS: TestErrorJSONOutput (0.25s)
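
Each line minikube prints under --output=json is a self-contained CloudEvents-style object with specversion, id, source, type, and a data payload, as the stdout block above shows. Below is a minimal Go sketch of how such a stream could be consumed; the Event struct and its field set are inferred from the log lines above, not taken from minikube's own source.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // Event mirrors the CloudEvents-style JSON lines shown above; the field
    // set is inferred from this log, not from minikube's internal types.
    type Event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e Event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // tolerate any non-JSON lines in the stream
            }
            // io.k8s.sigs.minikube.error events carry exitcode and message
            fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
        }
    }

Piped through a decoder like this, the run above would surface the DRV_UNSUPPORTED_OS error event alongside its exit code 56.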

TestKicCustomNetwork/create_custom_network (37.75s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-361644 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-361644 --network=: (35.667256823s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-361644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-361644
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-361644: (2.058499851s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.75s)

TestKicCustomNetwork/use_default_bridge_network (39.81s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-010358 --network=bridge
E1006 15:21:31.004412  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-010358 --network=bridge: (37.723572788s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-010358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-010358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-010358: (2.058798192s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.81s)

TestKicExistingNetwork (33.75s)

=== RUN   TestKicExistingNetwork
I1006 15:21:44.645840  805351 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1006 15:21:44.662313  805351 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1006 15:21:44.662384  805351 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1006 15:21:44.662401  805351 cli_runner.go:164] Run: docker network inspect existing-network
W1006 15:21:44.679040  805351 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1006 15:21:44.679069  805351 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1006 15:21:44.679082  805351 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1006 15:21:44.679234  805351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1006 15:21:44.695724  805351 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ee97fbb35735 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:8d:6e:87:cb:e6} reservation:<nil>}
I1006 15:21:44.696024  805351 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40003d3610}
I1006 15:21:44.696044  805351 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1006 15:21:44.696102  805351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1006 15:21:44.756658  805351 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-679665 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-679665 --network=existing-network: (31.552368092s)
helpers_test.go:175: Cleaning up "existing-network-679665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-679665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-679665: (2.057044072s)
I1006 15:22:18.382906  805351 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.75s)
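
The network_create.go lines above show the selection pattern: probe a candidate /24, skip it when an existing bridge already owns it (192.168.49.0/24 here), and create the network on the first free one (192.168.58.0/24). A rough Go sketch of that scan follows; the step of 9 in the third octet matches the subnets seen in this report (49, 58, 67, ...) but is only an assumption about the real allocator, and the hard-coded taken set stands in for what docker network inspect would report.

    package main

    import (
        "fmt"
        "net"
    )

    // taken stands in for the subnets that `docker network inspect` reports
    // as in use; hard-coded here to mirror the run above.
    var taken = map[string]bool{"192.168.49.0/24": true}

    // firstFreeSubnet walks /24 candidates (49, 58, 67, ... in the third
    // octet, an assumption based on the subnets in this log) and returns
    // the first one nothing claims.
    func firstFreeSubnet() (*net.IPNet, error) {
        for octet := 49; octet <= 254; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[cidr] {
                continue
            }
            _, subnet, err := net.ParseCIDR(cidr)
            if err != nil {
                return nil, err
            }
            return subnet, nil
        }
        return nil, fmt.Errorf("no free /24 available")
    }

    func main() {
        subnet, err := firstFreeSubnet()
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", subnet) // 192.168.58.0/24
    }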

TestKicCustomSubnet (34.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-721030 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-721030 --subnet=192.168.60.0/24: (32.479561226s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-721030 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-721030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-721030
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-721030: (2.136208337s)
--- PASS: TestKicCustomSubnet (34.64s)

TestKicStaticIP (37.68s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-778334 --static-ip=192.168.200.200
E1006 15:23:27.932481  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-778334 --static-ip=192.168.200.200: (35.286388185s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-778334 ip
helpers_test.go:175: Cleaning up "static-ip-778334" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-778334
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-778334: (2.242807449s)
--- PASS: TestKicStaticIP (37.68s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-333527 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-333527 --driver=docker  --container-runtime=docker: (31.462054692s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-336021 --driver=docker  --container-runtime=docker
E1006 15:24:26.571863  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-336021 --driver=docker  --container-runtime=docker: (37.116683419s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-333527
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-336021
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-336021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-336021
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-336021: (2.371506507s)
helpers_test.go:175: Cleaning up "first-333527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-333527
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-333527: (2.276232667s)
--- PASS: TestMinikubeProfile (74.71s)

TestMountStart/serial/StartWithMountFirst (8.44s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-622188 --memory=3072 --mount-string /tmp/TestMountStartserial2903003230/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-622188 --memory=3072 --mount-string /tmp/TestMountStartserial2903003230/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.44013437s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.44s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-622188 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (11.13s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-623993 --memory=3072 --mount-string /tmp/TestMountStartserial2903003230/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-623993 --memory=3072 --mount-string /tmp/TestMountStartserial2903003230/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.132420811s)
--- PASS: TestMountStart/serial/StartWithMountSecond (11.13s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-623993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-622188 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-622188 --alsologtostderr -v=5: (1.477236097s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-623993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-623993
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-623993: (1.202207267s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.67s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-623993
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-623993: (7.665161418s)
--- PASS: TestMountStart/serial/RestartStopped (8.67s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-623993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (93.24s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-119221 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1006 15:25:49.633628  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-119221 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.685157065s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.24s)

TestMultiNode/serial/DeployApp2Nodes (6.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-119221 -- rollout status deployment/busybox: (4.726910934s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-cwh5c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-tdb47 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-cwh5c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-tdb47 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-cwh5c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-tdb47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.80s)

TestMultiNode/serial/PingHostFrom2Pods (1.28s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-cwh5c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-cwh5c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-tdb47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-119221 -- exec busybox-7b57f96db7-tdb47 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.28s)

TestMultiNode/serial/AddNode (35.15s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-119221 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-119221 -v=5 --alsologtostderr: (34.416824113s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.15s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-119221 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.74s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

TestMultiNode/serial/CopyFile (10.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp testdata/cp-test.txt multinode-119221:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile14057890/001/cp-test_multinode-119221.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221:/home/docker/cp-test.txt multinode-119221-m02:/home/docker/cp-test_multinode-119221_multinode-119221-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m02 "sudo cat /home/docker/cp-test_multinode-119221_multinode-119221-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221:/home/docker/cp-test.txt multinode-119221-m03:/home/docker/cp-test_multinode-119221_multinode-119221-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m03 "sudo cat /home/docker/cp-test_multinode-119221_multinode-119221-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp testdata/cp-test.txt multinode-119221-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile14057890/001/cp-test_multinode-119221-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221-m02:/home/docker/cp-test.txt multinode-119221:/home/docker/cp-test_multinode-119221-m02_multinode-119221.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221 "sudo cat /home/docker/cp-test_multinode-119221-m02_multinode-119221.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221-m02:/home/docker/cp-test.txt multinode-119221-m03:/home/docker/cp-test_multinode-119221-m02_multinode-119221-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m03 "sudo cat /home/docker/cp-test_multinode-119221-m02_multinode-119221-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp testdata/cp-test.txt multinode-119221-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile14057890/001/cp-test_multinode-119221-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221-m03:/home/docker/cp-test.txt multinode-119221:/home/docker/cp-test_multinode-119221-m03_multinode-119221.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221 "sudo cat /home/docker/cp-test_multinode-119221-m03_multinode-119221.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 cp multinode-119221-m03:/home/docker/cp-test.txt multinode-119221-m02:/home/docker/cp-test_multinode-119221-m03_multinode-119221-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 ssh -n multinode-119221-m02 "sudo cat /home/docker/cp-test_multinode-119221-m03_multinode-119221-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.72s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-119221 node stop m03: (1.219833275s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-119221 status: exit status 7 (516.318161ms)
-- stdout --
	multinode-119221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119221-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-119221-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr: exit status 7 (539.781091ms)
-- stdout --
	multinode-119221
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119221-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-119221-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1006 15:27:48.946345  992356 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:27:48.947048  992356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:27:48.947092  992356 out.go:374] Setting ErrFile to fd 2...
	I1006 15:27:48.947115  992356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:27:48.947427  992356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 15:27:48.947721  992356 out.go:368] Setting JSON to false
	I1006 15:27:48.947782  992356 mustload.go:65] Loading cluster: multinode-119221
	I1006 15:27:48.948214  992356 config.go:182] Loaded profile config "multinode-119221": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 15:27:48.948252  992356 status.go:174] checking status of multinode-119221 ...
	I1006 15:27:48.948811  992356 cli_runner.go:164] Run: docker container inspect multinode-119221 --format={{.State.Status}}
	I1006 15:27:48.949182  992356 notify.go:220] Checking for updates...
	I1006 15:27:48.967990  992356 status.go:371] multinode-119221 host status = "Running" (err=<nil>)
	I1006 15:27:48.968014  992356 host.go:66] Checking if "multinode-119221" exists ...
	I1006 15:27:48.968385  992356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-119221
	I1006 15:27:48.999808  992356 host.go:66] Checking if "multinode-119221" exists ...
	I1006 15:27:49.000160  992356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:27:49.000217  992356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-119221
	I1006 15:27:49.019773  992356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37646 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/multinode-119221/id_rsa Username:docker}
	I1006 15:27:49.117681  992356 ssh_runner.go:195] Run: systemctl --version
	I1006 15:27:49.125014  992356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:27:49.138653  992356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1006 15:27:49.205599  992356 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-06 15:27:49.196027399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1006 15:27:49.206165  992356 kubeconfig.go:125] found "multinode-119221" server: "https://192.168.67.2:8443"
	I1006 15:27:49.206194  992356 api_server.go:166] Checking apiserver status ...
	I1006 15:27:49.206237  992356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1006 15:27:49.221242  992356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2201/cgroup
	I1006 15:27:49.230194  992356 api_server.go:182] apiserver freezer: "6:freezer:/docker/f35989bfcc656e118ddd28bb39dc0d01246778f0624596989a24dec1b59f11bb/kubepods/burstable/pod0c36e480d610601a9e4f7873bd070933/e7552772668c3b2a0d424bf91f7a5e97d7b041290db39f62ba908fbed9bd9e2d"
	I1006 15:27:49.230280  992356 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f35989bfcc656e118ddd28bb39dc0d01246778f0624596989a24dec1b59f11bb/kubepods/burstable/pod0c36e480d610601a9e4f7873bd070933/e7552772668c3b2a0d424bf91f7a5e97d7b041290db39f62ba908fbed9bd9e2d/freezer.state
	I1006 15:27:49.238534  992356 api_server.go:204] freezer state: "THAWED"
	I1006 15:27:49.238567  992356 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1006 15:27:49.246785  992356 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1006 15:27:49.246820  992356 status.go:463] multinode-119221 apiserver status = Running (err=<nil>)
	I1006 15:27:49.246855  992356 status.go:176] multinode-119221 status: &{Name:multinode-119221 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:27:49.246881  992356 status.go:174] checking status of multinode-119221-m02 ...
	I1006 15:27:49.247217  992356 cli_runner.go:164] Run: docker container inspect multinode-119221-m02 --format={{.State.Status}}
	I1006 15:27:49.264628  992356 status.go:371] multinode-119221-m02 host status = "Running" (err=<nil>)
	I1006 15:27:49.264653  992356 host.go:66] Checking if "multinode-119221-m02" exists ...
	I1006 15:27:49.264958  992356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-119221-m02
	I1006 15:27:49.282531  992356 host.go:66] Checking if "multinode-119221-m02" exists ...
	I1006 15:27:49.282854  992356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1006 15:27:49.282902  992356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-119221-m02
	I1006 15:27:49.301176  992356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37651 SSHKeyPath:/home/jenkins/minikube-integration/21701-803497/.minikube/machines/multinode-119221-m02/id_rsa Username:docker}
	I1006 15:27:49.396863  992356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1006 15:27:49.410827  992356 status.go:176] multinode-119221-m02 status: &{Name:multinode-119221-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:27:49.410860  992356 status.go:174] checking status of multinode-119221-m03 ...
	I1006 15:27:49.411178  992356 cli_runner.go:164] Run: docker container inspect multinode-119221-m03 --format={{.State.Status}}
	I1006 15:27:49.429552  992356 status.go:371] multinode-119221-m03 host status = "Stopped" (err=<nil>)
	I1006 15:27:49.429575  992356 status.go:384] host is not running, skipping remaining checks
	I1006 15:27:49.429583  992356 status.go:176] multinode-119221-m03 status: &{Name:multinode-119221-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
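
The --alsologtostderr trace above spells out the status pipeline: docker container inspect for host state, systemctl is-active for the kubelet, then an HTTPS GET against /healthz on the apiserver (https://192.168.67.2:8443 here, which returned 200 ok). A minimal Go sketch of just that last probe, assuming certificate verification is skipped as a quick diagnostic would; real callers should trust the CA from the kubeconfig instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver presents a self-signed certificate, so this quick
        // diagnostic skips verification; do not do this in production.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }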

TestMultiNode/serial/StartAfterStop (9.35s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-119221 node start m03 -v=5 --alsologtostderr: (8.5361163s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.35s)

TestMultiNode/serial/RestartKeepsNodes (74.56s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-119221
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-119221
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-119221: (23.199399921s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-119221 --wait=true -v=5 --alsologtostderr
E1006 15:28:27.931879  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-119221 --wait=true -v=5 --alsologtostderr: (51.233593355s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-119221
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.56s)

TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-119221 node delete m03: (5.051328723s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)

TestMultiNode/serial/StopMultiNode (21.68s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 stop
E1006 15:29:26.572945  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-119221 stop: (21.485950541s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-119221 status: exit status 7 (102.311564ms)
-- stdout --
	multinode-119221
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-119221-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr: exit status 7 (94.604712ms)
-- stdout --
	multinode-119221
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-119221-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1006 15:29:40.723999 1006065 out.go:360] Setting OutFile to fd 1 ...
	I1006 15:29:40.724123 1006065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:29:40.724134 1006065 out.go:374] Setting ErrFile to fd 2...
	I1006 15:29:40.724138 1006065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1006 15:29:40.724396 1006065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21701-803497/.minikube/bin
	I1006 15:29:40.724579 1006065 out.go:368] Setting JSON to false
	I1006 15:29:40.724615 1006065 mustload.go:65] Loading cluster: multinode-119221
	I1006 15:29:40.724697 1006065 notify.go:220] Checking for updates...
	I1006 15:29:40.724992 1006065 config.go:182] Loaded profile config "multinode-119221": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1006 15:29:40.725009 1006065 status.go:174] checking status of multinode-119221 ...
	I1006 15:29:40.725539 1006065 cli_runner.go:164] Run: docker container inspect multinode-119221 --format={{.State.Status}}
	I1006 15:29:40.745374 1006065 status.go:371] multinode-119221 host status = "Stopped" (err=<nil>)
	I1006 15:29:40.745398 1006065 status.go:384] host is not running, skipping remaining checks
	I1006 15:29:40.745406 1006065 status.go:176] multinode-119221 status: &{Name:multinode-119221 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1006 15:29:40.745448 1006065 status.go:174] checking status of multinode-119221-m02 ...
	I1006 15:29:40.745753 1006065 cli_runner.go:164] Run: docker container inspect multinode-119221-m02 --format={{.State.Status}}
	I1006 15:29:40.767151 1006065 status.go:371] multinode-119221-m02 host status = "Stopped" (err=<nil>)
	I1006 15:29:40.767177 1006065 status.go:384] host is not running, skipping remaining checks
	I1006 15:29:40.767184 1006065 status.go:176] multinode-119221-m02 status: &{Name:multinode-119221-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.68s)
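
Note how `minikube status` encodes cluster state in its exit code: both runs above exit 7 because every host is Stopped, while a healthy cluster exits 0. A small Go wrapper showing how a caller could branch on that; the interpretation of code 7 is taken only from the runs above, not from a documented contract.

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-119221", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))

        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("cluster fully running (exit 0)")
        case errors.As(err, &exitErr):
            // the stopped hosts above produced exit status 7
            fmt.Println("cluster not healthy, exit code:", exitErr.ExitCode())
        default:
            fmt.Println("could not run minikube:", err)
        }
    }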

TestMultiNode/serial/RestartMultiNode (56.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-119221 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-119221 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (55.935128447s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-119221 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.64s)

TestMultiNode/serial/ValidateNameConflict (38.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-119221
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-119221-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-119221-m02 --driver=docker  --container-runtime=docker: exit status 14 (96.332743ms)
-- stdout --
	* [multinode-119221-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-119221-m02' is duplicated with machine name 'multinode-119221-m02' in profile 'multinode-119221'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-119221-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-119221-m03 --driver=docker  --container-runtime=docker: (35.802517738s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-119221
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-119221: exit status 80 (340.287226ms)
-- stdout --
	* Adding node m03 to cluster multinode-119221 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-119221-m03 already exists in multinode-119221-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-119221-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-119221-m03: (2.098595618s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.39s)
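
The two failures above exercise the same rule from different directions: a new profile may not reuse a machine name that an existing multi-node profile already owns (exit 14, MK_USAGE), and `node add` refuses a node name that already exists as a standalone profile (exit 80, GUEST_NODE_ADD). A hedged sketch of the first check; validateProfileName and the hard-coded profile map are hypothetical illustrations, not minikube's actual code.

    package main

    import "fmt"

    // existing maps profile name -> machine names, mirroring the
    // multinode-119221 cluster above; a stand-in for the real profile store.
    var existing = map[string][]string{
        "multinode-119221": {"multinode-119221", "multinode-119221-m02"},
    }

    func validateProfileName(name string) error {
        for profile, machines := range existing {
            if name == profile {
                return fmt.Errorf("profile name %q already in use", name)
            }
            for _, m := range machines {
                if name == m {
                    return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        fmt.Println(validateProfileName("multinode-119221-m02")) // rejected, as above
        fmt.Println(validateProfileName("multinode-119221-m03")) // <nil>: allowed to start
    }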

TestPreload (175.02s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-358579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-358579 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m14.565618693s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-358579 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-358579 image pull gcr.io/k8s-minikube/busybox: (2.370674948s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-358579
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-358579: (10.971566134s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-358579 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1006 15:33:27.932251  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-358579 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (1m24.589749981s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-358579 image list
helpers_test.go:175: Cleaning up "test-preload-358579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-358579
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-358579: (2.295009423s)
--- PASS: TestPreload (175.02s)

TestScheduledStopUnix (109.52s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-656690 --memory=3072 --driver=docker  --container-runtime=docker
E1006 15:34:26.572713  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-656690 --memory=3072 --driver=docker  --container-runtime=docker: (36.200201105s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-656690 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-656690 -n scheduled-stop-656690
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-656690 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1006 15:34:51.850303  805351 retry.go:31] will retry after 98.892µs: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.851449  805351 retry.go:31] will retry after 216.764µs: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.852580  805351 retry.go:31] will retry after 221.676µs: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.853714  805351 retry.go:31] will retry after 234.079µs: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.854863  805351 retry.go:31] will retry after 291.659µs: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.855960  805351 retry.go:31] will retry after 704.594µs: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.857080  805351 retry.go:31] will retry after 1.184991ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.859249  805351 retry.go:31] will retry after 2.231411ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.862470  805351 retry.go:31] will retry after 2.618937ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.865732  805351 retry.go:31] will retry after 4.272751ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.870975  805351 retry.go:31] will retry after 3.550868ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.875228  805351 retry.go:31] will retry after 5.541828ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.881464  805351 retry.go:31] will retry after 15.654486ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.897681  805351 retry.go:31] will retry after 28.356516ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
I1006 15:34:51.926623  805351 retry.go:31] will retry after 21.738171ms: open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/scheduled-stop-656690/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-656690 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-656690 -n scheduled-stop-656690
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-656690
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-656690 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-656690
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-656690: exit status 7 (71.013642ms)

-- stdout --
	scheduled-stop-656690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-656690 -n scheduled-stop-656690
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-656690 -n scheduled-stop-656690: exit status 7 (67.539861ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-656690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-656690
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-656690: (1.67045979s)
--- PASS: TestScheduledStopUnix (109.52s)
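The retry trace above is minikube polling for the scheduled-stop pid file, waiting roughly twice as long between attempts (the real helper also adds jitter, which is why the intervals are not strictly monotonic). A minimal sketch of that poll-with-backoff pattern in Go (a hypothetical helper, not minikube's actual retry code; jitter omitted):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls for path, roughly doubling the wait between
    // attempts, until the file exists or maxWait elapses.
    func waitForFile(path string, maxWait time.Duration) error {
        backoff := 100 * time.Microsecond
        deadline := time.Now().Add(maxWait)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            } else if !errors.Is(err, os.ErrNotExist) {
                return err
            }
            fmt.Printf("will retry after %v: %s: no such file or directory\n", backoff, path)
            time.Sleep(backoff)
            backoff *= 2
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        // Example: waiting for a profile's pid file, as in the log above.
        fmt.Println(waitForFile("/tmp/scheduled-stop.pid", time.Second))
    }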

TestSkaffold (146.24s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1864810934 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-317156 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-317156 --memory=3072 --driver=docker  --container-runtime=docker: (36.185159646s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1864810934 run --minikube-profile skaffold-317156 --kube-context skaffold-317156 --status-check=true --port-forward=false --interactive=false
E1006 15:38:11.008417  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1864810934 run --minikube-profile skaffold-317156 --kube-context skaffold-317156 --status-check=true --port-forward=false --interactive=false: (1m32.579165629s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-67b6598f95-fgpg7" [72cfcf1e-c4f0-4935-9db2-fd776fd8b256] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002913237s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-6c6dfbb696-8spzp" [5ad514a9-a888-45d0-b83d-508b574ede65] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002961915s
helpers_test.go:175: Cleaning up "skaffold-317156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-317156
E1006 15:38:27.932003  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-317156: (3.044035796s)
--- PASS: TestSkaffold (146.24s)

TestInsufficientStorage (14.17s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-928961 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-928961 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.869687601s)

-- stdout --
	{"specversion":"1.0","id":"0e7e7c9d-5a56-4483-b660-a081ed19255c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-928961] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f83b0678-ce34-4be3-9776-e0837955ad75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21701"}}
	{"specversion":"1.0","id":"8115fcff-88f7-45c5-8a45-2f9c8bbcc8fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"48f0be7a-0f75-42a6-b8e2-a97a887c6800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig"}}
	{"specversion":"1.0","id":"fb3104d7-67bc-44b0-9867-8bb4742a1cb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube"}}
	{"specversion":"1.0","id":"32ee587e-9676-4316-a7a2-fae269c06826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8fa9125e-1960-461d-8710-3af1c8c2c4de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f285f6f9-cf39-45b0-bc5e-ef2f2749c0f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"270e8b37-4301-414a-8895-b15ee1e5636d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"62c7413a-d61d-46fd-bf7c-eb6eec69bbe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cba2675-fadf-442d-be55-0bd176a23e7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9d5cf783-e6a8-4a49-803c-12c1baa0e1f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-928961\" primary control-plane node in \"insufficient-storage-928961\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dcb85870-c081-48ce-a13f-7d5c39d5db97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"52bce00f-5570-47ff-8ed1-3b1033738e9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f95edf9-d6f0-453b-a920-8b9f46f601c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-928961 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-928961 --output=json --layout=cluster: exit status 7 (302.532803ms)

-- stdout --
	{"Name":"insufficient-storage-928961","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-928961","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1006 15:38:42.959430 1040072 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-928961" does not appear in /home/jenkins/minikube-integration/21701-803497/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-928961 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-928961 --output=json --layout=cluster: exit status 7 (298.251685ms)

-- stdout --
	{"Name":"insufficient-storage-928961","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-928961","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1006 15:38:43.259833 1040138 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-928961" does not appear in /home/jenkins/minikube-integration/21701-803497/kubeconfig
	E1006 15:38:43.269889 1040138 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/insufficient-storage-928961/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-928961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-928961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-928961: (1.698017862s)
--- PASS: TestInsufficientStorage (14.17s)
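With --output=json, each stdout line above is a CloudEvents envelope whose payload sits under "data". A minimal sketch that scans such output and surfaces the error event (the struct is a hand-written subset for illustration, not minikube's own type):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event declares only the envelope fields used below.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // e.g. minikube start --output=json ... | thisprog
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error lines can be long
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Run against the stdout above, this would print the RSRC_DOCKER_STORAGE event with exit code 26 and its out-of-disk-space message.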

TestRunningBinaryUpgrade (82.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.685677844 start -p running-upgrade-777232 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1006 15:46:00.589711  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.685677844 start -p running-upgrade-777232 --memory=3072 --vm-driver=docker  --container-runtime=docker: (42.750485758s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-777232 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-777232 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.210142318s)
helpers_test.go:175: Cleaning up "running-upgrade-777232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-777232
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-777232: (2.425752168s)
--- PASS: TestRunningBinaryUpgrade (82.13s)

TestKubernetesUpgrade (137.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.348131771s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-469469
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-469469: (2.092760357s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-469469 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-469469 status --format={{.Host}}: exit status 7 (160.155847ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.207280899s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-469469 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (134.665132ms)

-- stdout --
	* [kubernetes-upgrade-469469] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-469469
	    minikube start -p kubernetes-upgrade-469469 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4694692 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-469469 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-469469 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.739302708s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-469469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-469469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-469469: (2.781916893s)
--- PASS: TestKubernetesUpgrade (137.60s)
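The exit-106 run above is minikube refusing to move the existing v1.34.1 cluster back to v1.28.0. A sketch of that guard as a semver comparison (golang.org/x/mod/semver is an assumption here; minikube's actual validation may use a different library):

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkVersionChange mirrors the K8S_DOWNGRADE_UNSUPPORTED guard:
    // upgrades and same-version restarts pass, downgrades are rejected.
    func checkVersionChange(existing, requested string) error {
        if semver.Compare(requested, existing) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkVersionChange("v1.34.1", "v1.28.0")) // rejected, as in the log
        fmt.Println(checkVersionChange("v1.28.0", "v1.34.1")) // allowed
    }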

TestMissingContainerUpgrade (89.87s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2670657093 start -p missing-upgrade-589886 --memory=3072 --driver=docker  --container-runtime=docker
E1006 15:44:26.567787  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2670657093 start -p missing-upgrade-589886 --memory=3072 --driver=docker  --container-runtime=docker: (33.445832949s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-589886
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-589886: (1.66526784s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-589886
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-589886 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1006 15:44:38.668372  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-589886 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.309545904s)
helpers_test.go:175: Cleaning up "missing-upgrade-589886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-589886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-589886: (3.800722567s)
--- PASS: TestMissingContainerUpgrade (89.87s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.17s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-341669 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-341669 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (168.831022ms)

-- stdout --
	* [NoKubernetes-341669] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21701
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21701-803497/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21701-803497/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.17s)
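Exit code 14 (MK_USAGE) is plain flag validation: --no-kubernetes and --kubernetes-version contradict each other. A minimal sketch of that kind of mutual-exclusion check using the standard flag package (illustrative only; minikube's own CLI is cobra-based):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        version := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // The two flags are mutually exclusive, as in the log above.
        if *noK8s && *version != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14)
        }
    }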

TestNoKubernetes/serial/StartWithK8s (48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-341669 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1006 15:39:26.566169  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-341669 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (47.460892097s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-341669 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.00s)

TestNoKubernetes/serial/StartWithStopK8s (18.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-341669 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-341669 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.728228116s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-341669 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-341669 status -o json: exit status 2 (330.825156ms)

-- stdout --
	{"Name":"NoKubernetes-341669","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-341669
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-341669: (1.732276286s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.79s)

TestNoKubernetes/serial/Start (10.39s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-341669 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-341669 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (10.386457133s)
--- PASS: TestNoKubernetes/serial/Start (10.39s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-341669 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-341669 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.01558ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
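The non-zero exit is the expected result here: systemctl is-active --quiet communicates purely through its exit code (0 means active; the status 3 surfaced by ssh means inactive), so the test passes when the command fails. A sketch of reading that exit code from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // unitActive reports whether a systemd unit is active, by exit code alone.
    func unitActive(unit string) (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
        if err == nil {
            return true, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return false, nil // non-zero exit (3 in the log above) means not active
        }
        return false, err // systemctl itself could not be run
    }

    func main() {
        fmt.Println(unitActive("kubelet"))
    }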

TestNoKubernetes/serial/ProfileList (1.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-341669
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-341669: (1.232352726s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (8.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-341669 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-341669 --driver=docker  --container-runtime=docker: (8.326784833s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-341669 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-341669 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.251328ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (1.17s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.17s)

TestStoppedBinaryUpgrade/Upgrade (79.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2685304202 start -p stopped-upgrade-579743 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1006 15:43:16.730650  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:16.737034  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:16.748447  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:16.769829  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:16.811213  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:16.892623  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:17.054089  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:17.375766  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:18.017598  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:19.299166  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:21.860747  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:26.982806  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:43:27.931836  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2685304202 start -p stopped-upgrade-579743 --memory=3072 --vm-driver=docker  --container-runtime=docker: (55.274714077s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2685304202 -p stopped-upgrade-579743 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2685304202 -p stopped-upgrade-579743 stop: (1.964576594s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-579743 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1006 15:43:37.224122  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-579743 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.788594097s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (79.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-579743
E1006 15:43:57.706066  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-579743: (1.195452229s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestPause/serial/Start (82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-676484 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-676484 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m21.996426829s)
--- PASS: TestPause/serial/Start (82.00s)

TestNetworkPlugins/group/auto/Start (80.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1006 15:48:16.730979  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m20.291492376s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.29s)

TestPause/serial/SecondStartNoReconfiguration (52.29s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-676484 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1006 15:48:27.932436  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:48:44.431383  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-676484 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.246618724s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-389345 "pgrep -a kubelet"
I1006 15:49:09.307273  805351 config.go:182] Loaded profile config "auto-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nmv58" [579dd989-7759-424e-b950-37fd93dd7eae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nmv58" [579dd989-7759-424e-b950-37fd93dd7eae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003708035s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

TestNetworkPlugins/group/auto/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.31s)
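The DNS check resolves the short name kubernetes.default from inside the pod; the cluster DNS search path expands it to the API server's service record. An equivalent lookup in Go, which would only succeed with an in-cluster resolver configuration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Inside a pod, /etc/resolv.conf search domains expand this to
        // kubernetes.default.svc.<cluster-domain>.
        addrs, err := net.LookupHost("kubernetes.default")
        fmt.Println(addrs, err)
    }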

TestPause/serial/Pause (0.96s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-676484 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
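The hairpin check has the pod dial its own service name (netcat:8080); nc -w 5 ... -z simply attempts a TCP connect and sends nothing. An equivalent connect-probe in Go with net.DialTimeout (a generic sketch of the same idea, not the test's actual mechanism):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // reachable mimics `nc -w 5 -z host port`: success means a TCP
    // connection could be opened within the timeout.
    func reachable(host, port string) bool {
        conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        // Hairpin case: a pod reaching itself back through its service VIP.
        fmt.Println(reachable("netcat", "8080"))
    }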

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-676484 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-676484 --output=json --layout=cluster: exit status 2 (351.606516ms)

-- stdout --
	{"Name":"pause-676484","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-676484","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
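The --layout=cluster status above uses HTTP-flavoured codes (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage), and the command exits non-zero whenever the cluster is not fully running. A sketch of decoding that envelope (the struct is a hand-written subset of the fields printed above):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // clusterStatus covers just the fields shown in the logs above.
    type clusterStatus struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
        Nodes      []struct {
            Name       string `json:"Name"`
            StatusName string `json:"StatusName"`
        } `json:"Nodes"`
    }

    func main() {
        raw := `{"Name":"pause-676484","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-676484","StatusName":"OK"}]}`
        var st clusterStatus
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
    }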

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-676484 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-676484 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (2.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-676484 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-676484 --alsologtostderr -v=5: (2.22792501s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

TestPause/serial/VerifyDeletedResources (0.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-676484
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-676484: exit status 1 (19.653747ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-676484: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)

TestNetworkPlugins/group/kindnet/Start (68.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E1006 15:49:26.566627  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m8.697146217s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.70s)

TestNetworkPlugins/group/calico/Start (69.59s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.590132538s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-sv2m4" [ac953980-8161-4832-aec7-1fbab1209fc6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004326993s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-389345 "pgrep -a kubelet"
I1006 15:50:39.608848  805351 config.go:182] Loaded profile config "kindnet-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jdlqq" [47ee33ac-5910-49e0-974e-9357f20bb6d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jdlqq" [47ee33ac-5910-49e0-974e-9357f20bb6d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004130558s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

TestNetworkPlugins/group/kindnet/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

TestNetworkPlugins/group/kindnet/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.32s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-8twqf" [b16064c1-52e6-4f3c-af24-2715ef25a276] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024548465s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-389345 "pgrep -a kubelet"
I1006 15:50:59.928455  805351 config.go:182] Loaded profile config "calico-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (12.64s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j7s2p" [989613a8-0d2f-4a0e-a754-9a84e019a10b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j7s2p" [989613a8-0d2f-4a0e-a754-9a84e019a10b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006662918s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.64s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.33s)

TestNetworkPlugins/group/calico/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.31s)

TestNetworkPlugins/group/custom-flannel/Start (60.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m0.765401043s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.77s)
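
Here --cni is given a path rather than a plugin name: minikube's --cni flag accepts auto, a built-in plugin (bridge, calico, cilium, flannel, kindnet), or a path to a CNI manifest, which is how this run installs flannel from a local file. A minimal sketch under the same docker driver used throughout this report (the profile name is illustrative):

  minikube start -p custom-flannel-demo --memory=3072 \
    --cni=testdata/kube-flannel.yaml \
    --driver=docker --container-runtime=docker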

TestNetworkPlugins/group/false/Start (85.71s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m25.709414602s)
--- PASS: TestNetworkPlugins/group/false/Start (85.71s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-389345 "pgrep -a kubelet"
I1006 15:52:16.879503  805351 config.go:182] Loaded profile config "custom-flannel-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tjz5z" [12a568f3-0707-4616-9387-544e70927330] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tjz5z" [12a568f3-0707-4616-9387-544e70927330] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007608093s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.50s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.30s)

TestNetworkPlugins/group/enable-default-cni/Start (78.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m18.889802898s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.89s)

TestNetworkPlugins/group/false/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-389345 "pgrep -a kubelet"
I1006 15:53:06.411877  805351 config.go:182] Loaded profile config "false-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.44s)

TestNetworkPlugins/group/false/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qv577" [ba2a6ed6-b434-4ede-b941-a22b84f752bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qv577" [ba2a6ed6-b434-4ede-b941-a22b84f752bd] Running
E1006 15:53:16.730693  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004426171s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.33s)

TestNetworkPlugins/group/false/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.28s)

TestNetworkPlugins/group/false/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.23s)

TestNetworkPlugins/group/false/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.26s)

TestNetworkPlugins/group/flannel/Start (53.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (53.814341203s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.81s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-389345 "pgrep -a kubelet"
E1006 15:54:09.573967  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	(same auto-389345 cert error repeated 6 times between 15:54:09.573 and 15:54:09.737)
I1006 15:54:09.824299  805351 config.go:182] Loaded profile config "enable-default-cni-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-389345 replace --force -f testdata/netcat-deployment.yaml
E1006 15:54:09.899041  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:54:10.220276  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xrgqp" [d4dbab32-cf86-4cc3-a238-02b927a3e501] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1006 15:54:10.861587  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:54:12.142903  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:54:14.704972  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xrgqp" [d4dbab32-cf86-4cc3-a238-02b927a3e501] Running
E1006 15:54:19.826988  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003294296s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.46s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fvtx8" [1d4ae604-8513-4b1e-9d8a-0d6a4a74d742] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004231352s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
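
ControllerPod gates the connectivity checks on the CNI's own daemon pod becoming Ready; here that is the flannel DaemonSet pod labelled app=flannel in the kube-flannel namespace. A rough plain-kubectl equivalent of the wait, assuming the same labels as above:

  kubectl --context flannel-389345 -n kube-flannel wait pod \
    -l app=flannel --for=condition=Ready --timeout=10m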

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-389345 "pgrep -a kubelet"
I1006 15:54:42.275194  805351 config.go:182] Loaded profile config "flannel-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/flannel/NetCatPod (11.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6cbsj" [904a5ea4-bb11-4c9e-b481-ff73e36af8f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6cbsj" [904a5ea4-bb11-4c9e-b481-ff73e36af8f4] Running
E1006 15:54:50.550443  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:54:51.010315  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004335714s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.46s)

TestNetworkPlugins/group/bridge/Start (81.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m21.373042614s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.37s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/Start (74.81s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1006 15:55:31.512705  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:55:33.134397  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kindnet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	(same kindnet-389345 cert error repeated 13 times between 15:55:33.134 and 15:55:53.630)
E1006 15:55:54.457336  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/calico-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	(same calico-389345 cert error repeated 12 times between 15:55:54.457 and 15:56:04.709)
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-389345 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m14.810831517s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (74.81s)
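
Alongside the CNI manifests, this run also covers the legacy networking paths: --cni=false starts the cluster with no CNI installed, --cni=bridge uses the basic bridge plugin, and --network-plugin=kubenet selects kubelet's built-in kubenet networking instead of a CNI. A sketch of the kubenet variant, mirroring the command above (profile name illustrative):

  minikube start -p kubenet-demo --memory=3072 \
    --network-plugin=kubenet --driver=docker --container-runtime=docker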

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-389345 "pgrep -a kubelet"
I1006 15:56:06.069506  805351 config.go:182] Loaded profile config "bridge-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qf7w6" [b76d046a-be95-428c-b36e-7042ae3cdba8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qf7w6" [b76d046a-be95-428c-b36e-7042ae3cdba8] Running
E1006 15:56:14.112119  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kindnet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:56:14.951750  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/calico-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003354526s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-389345 "pgrep -a kubelet"
I1006 15:56:36.539139  805351 config.go:182] Loaded profile config "kubenet-389345": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-389345 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-plpcd" [a5f8919a-183b-4669-a5f5-e3d83dff1cbf] Pending
helpers_test.go:352: "netcat-cd4db9dbf-plpcd" [a5f8919a-183b-4669-a5f5-e3d83dff1cbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.003924884s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.36s)

TestStartStop/group/old-k8s-version/serial/FirstStart (94.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-558736 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-558736 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m34.900107723s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (94.90s)

TestNetworkPlugins/group/kubenet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-389345 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.24s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-389345 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.22s)
E1006 16:03:06.718091  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/no-preload/serial/FirstStart (92.64s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-059123 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 15:57:16.395935  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/calico-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:57:17.306918  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/custom-flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	(same custom-flannel-389345 cert error repeated 14 times between 15:57:17.306 and 15:57:58.283)
E1006 15:58:06.718410  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	(same false-389345 cert error repeated 11 times between 15:58:06.718 and 15:58:11.851)
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-059123 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m32.635861008s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.64s)
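
--preload=false makes minikube skip the preloaded image tarball it would normally download for this Kubernetes version, so each component image is pulled individually during start; the suite uses it to exercise that slower path. Sketch (profile name illustrative):

  minikube start -p no-preload-demo --memory=3072 --preload=false \
    --driver=docker --container-runtime=docker --kubernetes-version=v1.34.1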

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-558736 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [007e124c-b6e3-4dc1-ab7d-10737025cb46] Pending
helpers_test.go:352: "busybox" [007e124c-b6e3-4dc1-ab7d-10737025cb46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 15:58:16.730081  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [007e124c-b6e3-4dc1-ab7d-10737025cb46] Running
E1006 15:58:16.972550  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:58:16.995968  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kindnet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004089399s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-558736 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)
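
DeployApp checks that a plain workload schedules and runs on the freshly started cluster, then reads the container's open-file limit. The same steps by hand, assuming a pod manifest like testdata/busybox.yaml:

  kubectl --context old-k8s-version-558736 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-558736 wait pod busybox --for=condition=Ready --timeout=8m
  kubectl --context old-k8s-version-558736 exec busybox -- /bin/sh -c "ulimit -n"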

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-558736 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-558736 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059604288s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-558736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (11.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-558736 --alsologtostderr -v=3
E1006 15:58:27.214503  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:58:27.932462  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-558736 --alsologtostderr -v=3: (11.104310217s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-558736 -n old-k8s-version-558736
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-558736 -n old-k8s-version-558736: exit status 7 (78.614662ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-558736 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
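
On a stopped profile, minikube status exits non-zero (exit status 7 above, with the host reported as Stopped), which the test explicitly records as "may be ok"; the point of EnableAddonAfterStop is that addon configuration still succeeds while the cluster is down. Roughly:

  minikube status --format='{{.Host}}' -p old-k8s-version-558736 || true   # prints Stopped on a stopped profile
  minikube addons enable dashboard -p old-k8s-version-558736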

TestStartStop/group/old-k8s-version/serial/SecondStart (60.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-558736 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1006 15:58:38.317793  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/calico-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:58:39.245952  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/custom-flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:58:47.696354  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-558736 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (59.52606888s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-558736 -n old-k8s-version-558736
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.01s)
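
SecondStart restarts the stopped profile with the same flags as FirstStart and then confirms via minikube status that the host is back up, demonstrating that the stop/start cycle preserves the cluster. Condensed:

  minikube stop -p old-k8s-version-558736
  minikube start -p old-k8s-version-558736 --memory=3072 --kubernetes-version=v1.28.0 \
    --driver=docker --container-runtime=docker
  minikube status --format='{{.Host}}' -p old-k8s-version-558736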

TestStartStop/group/no-preload/serial/DeployApp (11.65s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-059123 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ded4f1a-8cae-4f64-a5cd-b9e7e2124e3f] Pending
helpers_test.go:352: "busybox" [6ded4f1a-8cae-4f64-a5cd-b9e7e2124e3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6ded4f1a-8cae-4f64-a5cd-b9e7e2124e3f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.037114459s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-059123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.65s)
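
The DeployApp step above boils down to three kubectl calls against the profile's context: create the pod from testdata/busybox.yaml, wait for the integration-test=busybox selector to become Ready (the log shows an 8m budget), then exec the ulimit probe. The test polls pod state itself in helpers_test.go; kubectl wait is used here as a stand-in, so treat this as a by-hand sketch rather than the test's own code:

    kubectl --context no-preload-059123 create -f testdata/busybox.yaml
    kubectl --context no-preload-059123 -n default wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context no-preload-059123 exec busybox -- /bin/sh -c "ulimit -n"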

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-059123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-059123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046946723s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-059123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (11.14s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-059123 --alsologtostderr -v=3
E1006 15:59:09.572877  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:09.637256  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.240900  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.247260  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.258682  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.280207  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.321605  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.403081  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.564674  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:10.886319  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:11.527994  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:12.809344  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-059123 --alsologtostderr -v=3: (11.142927862s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.14s)
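
A note on the recurring "Loading client cert failed" lines throughout this run: they appear to come from client-go's certificate-rotation watcher inside the long-running test process (PID 805351), which still holds cached transports for profiles deleted earlier in the run (auto-389345, enable-default-cni-389345, and so on), so every reload attempt hits a missing client.crt. They are noise, not failures of the test under way. Deleting a profile normally also prunes its kubeconfig entries; a hedged manual cleanup for one of the named profiles would be:

    out/minikube-linux-arm64 delete -p enable-default-cni-389345
    # or, if only stale kubeconfig entries remain:
    kubectl config delete-context enable-default-cni-389345
    kubectl config delete-cluster enable-default-cni-389345
    kubectl config delete-user enable-default-cni-389345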

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-059123 -n no-preload-059123
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-059123 -n no-preload-059123: exit status 7 (79.691327ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-059123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
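
The "exit status 7 (may be ok)" above is expected rather than alarming: per minikube's status help text, the exit code encodes component health bitwise (1 = host not running, 2 = kubelet not running, 4 = apiserver not running), so 7 is what a cleanly stopped cluster reports. A small shell sketch for probing a stopped profile without tripping set -e (profile name taken from this test):

    rc=0
    out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-059123 || rc=$?
    echo "minikube status exit code: ${rc}"   # 7 here just means everything is stopped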

TestStartStop/group/no-preload/serial/SecondStart (52.12s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-059123 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 15:59:15.370685  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:20.494109  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:26.566116  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/functional-933184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:28.657957  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:30.736210  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-059123 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (51.680869049s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-059123 -n no-preload-059123
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.12s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r2w6q" [d49f7856-6de6-483a-9214-db4c261f100e] Running
E1006 15:59:35.822393  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:35.828815  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:35.840164  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:35.861527  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:35.902981  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:35.984653  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:36.146191  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:36.467970  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:37.109591  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:37.275936  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:38.390996  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 15:59:39.793286  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003392988s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r2w6q" [d49f7856-6de6-483a-9214-db4c261f100e] Running
E1006 15:59:40.952549  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004779593s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-558736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-558736 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-558736 --alsologtostderr -v=1
E1006 15:59:46.074117  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-558736 -n old-k8s-version-558736
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-558736 -n old-k8s-version-558736: exit status 2 (361.555469ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-558736 -n old-k8s-version-558736
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-558736 -n old-k8s-version-558736: exit status 2 (346.301761ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-558736 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-558736 -n old-k8s-version-558736
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-558736 -n old-k8s-version-558736
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.06s)
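
The Pause block above reads as a recipe: pause the profile, confirm via Go-template status output that the apiserver reports Paused while the kubelet reports Stopped (both with the expected exit status 2), then unpause and re-check. Replayed by hand against the same profile, using only commands already shown in the log:

    out/minikube-linux-arm64 pause -p old-k8s-version-558736
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-558736  # Paused, exit 2
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-558736    # Stopped, exit 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-558736
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-558736  # Running again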

TestStartStop/group/embed-certs/serial/FirstStart (88.88s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-802407 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 15:59:56.316231  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:00:01.168949  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/custom-flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-802407 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m28.881554699s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.88s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-624lz" [e7300c63-7e4c-4efd-a153-fafcc298bcb8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003866631s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-624lz" [e7300c63-7e4c-4efd-a153-fafcc298bcb8] Running
E1006 16:00:16.798590  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003683763s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-059123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-059123 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
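
VerifyKubernetesImages runs image list --format=json and flags anything outside the stock Kubernetes image set; the busybox image noted above is the pod deployed earlier in this group, not a leak. A hedged one-liner to eyeball the same data (the repoTags field name is assumed from minikube's JSON image listing):

    out/minikube-linux-arm64 -p no-preload-059123 image list --format=json \
      | jq -r '.[].repoTags[]?' | sort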

TestStartStop/group/no-preload/serial/Pause (3.98s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-059123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-059123 --alsologtostderr -v=1: (1.016113469s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-059123 -n no-preload-059123
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-059123 -n no-preload-059123: exit status 2 (433.674248ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-059123 -n no-preload-059123
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-059123 -n no-preload-059123: exit status 2 (425.511261ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-059123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-059123 -n no-preload-059123
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-059123 -n no-preload-059123
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.98s)

TestStartStop/group/newest-cni/serial/FirstStart (45.83s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-290707 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 16:00:32.180861  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:00:33.134077  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kindnet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:00:50.579840  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:00:54.457634  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/calico-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:00:57.760466  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:00.837369  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kindnet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.330010  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.336385  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.347867  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.369333  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.410921  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.492401  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.654487  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:06.976442  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:07.617906  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:08.899619  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-290707 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (45.829784432s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.83s)
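
This FirstStart passes --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so kubeadm should allocate node pod CIDRs out of 10.42.0.0/16 instead of the default range. A quick hedged check, assuming the usual minikube convention that the single node is named after the profile:

    kubectl --context newest-cni-290707 get node newest-cni-290707 \
      -o jsonpath='{.spec.podCIDR}'   # expect something like 10.42.0.0/24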

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-290707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1006 16:01:11.461047  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-290707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.208346631s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/newest-cni/serial/Stop (9.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-290707 --alsologtostderr -v=3
E1006 16:01:16.582759  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-290707 --alsologtostderr -v=3: (9.238375362s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.24s)

TestStartStop/group/embed-certs/serial/DeployApp (11.49s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-802407 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a003ec8e-1ff9-47cc-b780-21e23e6403c7] Pending
helpers_test.go:352: "busybox" [a003ec8e-1ff9-47cc-b780-21e23e6403c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1006 16:01:22.159789  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/calico-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [a003ec8e-1ff9-47cc-b780-21e23e6403c7] Running
E1006 16:01:26.825254  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004168357s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-802407 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.49s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-290707 -n newest-cni-290707
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-290707 -n newest-cni-290707: exit status 7 (90.647643ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-290707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (20.08s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-290707 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-290707 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (19.632852381s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-290707 -n newest-cni-290707
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.08s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.77s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-802407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-802407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.630862453s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-802407 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.77s)

TestStartStop/group/embed-certs/serial/Stop (11.82s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-802407 --alsologtostderr -v=3
E1006 16:01:36.875137  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:36.881456  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:36.892778  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:36.914124  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:36.955459  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:37.036797  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:37.198245  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:37.520008  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:38.162133  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:39.444055  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-802407 --alsologtostderr -v=3: (11.818740551s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.82s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-290707 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.4s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-290707 --alsologtostderr -v=1
E1006 16:01:42.006688  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-290707 -n newest-cni-290707
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-290707 -n newest-cni-290707: exit status 2 (331.306022ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-290707 -n newest-cni-290707
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-290707 -n newest-cni-290707: exit status 2 (339.22981ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-290707 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-290707 -n newest-cni-290707
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-290707 -n newest-cni-290707
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.44s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-802407 -n embed-certs-802407
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-802407 -n embed-certs-802407: exit status 7 (197.053212ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-802407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/embed-certs/serial/SecondStart (57.98s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-802407 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 16:01:47.130177  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:47.306657  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-802407 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (57.62138368s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-802407 -n embed-certs-802407
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (57.98s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-155028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 16:01:54.102960  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:01:57.371419  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:02:17.306884  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/custom-flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:02:17.853423  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/kubenet-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:02:19.682014  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:02:28.268066  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-155028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (59.545322767s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.55s)
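
Here --apiserver-port=8444 moves the API server off minikube's default 8443, and the kubeconfig entry minikube writes should point at the new port. A hedged way to confirm it from the merged kubeconfig, selecting the cluster by the profile name seen in the log:

    kubectl config view \
      -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-155028")].cluster.server}'
    # expect a URL ending in :8444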

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fxd5z" [29f11ef1-db8b-4dd8-baee-05dea66e69f0] Running
E1006 16:02:45.022135  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/custom-flannel-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003981154s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
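
UserAppExistsAfterStop checks that the dashboard enabled while the cluster was stopped actually comes up after SecondStart. The same readiness wait, expressed directly with kubectl using the selector, namespace, and 9m budget from the log (again an approximation of the test's own polling loop):

    kubectl --context embed-certs-802407 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m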

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-155028 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2040bfff-40b0-4d84-89ed-2809bbbbd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2040bfff-40b0-4d84-89ed-2809bbbbd9b6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004735047s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-155028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.60s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fxd5z" [29f11ef1-db8b-4dd8-baee-05dea66e69f0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004294937s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-802407 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-802407 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.29s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-802407 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-802407 -n embed-certs-802407
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-802407 -n embed-certs-802407: exit status 2 (337.13857ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-802407 -n embed-certs-802407
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-802407 -n embed-certs-802407: exit status 2 (368.262243ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-802407 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-802407 -n embed-certs-802407
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-802407 -n embed-certs-802407
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-155028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-155028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.621928417s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-155028 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.36s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-155028 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-155028 --alsologtostderr -v=3: (11.356469512s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028: exit status 7 (74.967749ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-155028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-155028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1006 16:03:12.798221  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:12.804486  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:12.815830  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:12.837284  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:12.878666  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:12.960014  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:13.121445  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:13.443287  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:14.085436  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:15.366923  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:16.730373  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/skaffold-317156/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:17.928362  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:23.050035  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:27.931544  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/addons-006450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:33.292261  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:34.422023  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/false-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.145742  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.152241  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.163770  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.185231  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.226599  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.307997  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.469476  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:49.791690  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:50.190429  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/bridge-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:50.433084  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:51.714544  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:53.774800  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/old-k8s-version-558736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:54.276517  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:03:59.398679  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-155028 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (50.824004739s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.17s)
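The second start reuses the flags from the first run (same memory, apiserver port, driver, runtime and Kubernetes version), and --wait=true keeps the command blocked until core components are healthy; the interleaved cert_rotation errors are client-go noise about client certs of profiles deleted earlier in the run. A minimal sketch of the restart-and-verify flow, assuming a minikube binary on PATH and the profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-155028"
	// Flags copied from the start invocation in the log above.
	start := exec.Command("minikube", "start", "-p", profile,
		"--memory=3072", "--wait=true", "--apiserver-port=8444",
		"--driver=docker", "--container-runtime=docker",
		"--kubernetes-version=v1.34.1")
	if err := start.Run(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	// After a successful restart the host should report Running (exit 0),
	// which is what the final status check above verifies.
	out, err := exec.Command("minikube", "status", "--format", "{{.Host}}", "-p", profile).Output()
	fmt.Printf("host=%s err=%v\n", out, err)
}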

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-66tzr" [b8cceff1-20ed-4638-b128-46a5f8d0eaf2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002867721s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
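The "waiting ... for pods matching" lines come from a helper that polls pods by label until one is Running and Ready. A rough equivalent of that polling pattern using kubectl, with the label, namespace and context from this test (an illustration only, not helpers_test.go's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(9 * time.Minute) // matches the 9m0s wait above
	for time.Now().Before(deadline) {
		// jsonpath yields each matching pod's Ready condition ("True"/"False").
		out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-155028",
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
		if strings.Contains(string(out), "True") {
			fmt.Println("k8s-app=kubernetes-dashboard healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for kubernetes-dashboard")
}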

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-66tzr" [b8cceff1-20ed-4638-b128-46a5f8d0eaf2] Running
E1006 16:04:09.572328  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/auto-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:04:09.640904  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/no-preload-059123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1006 16:04:10.240223  805351 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21701-803497/.minikube/profiles/enable-default-cni-389345/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002916729s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-155028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-155028 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
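"Found non-minikube image" above is informational: the test lists every image in the profile and reports those outside the expected Kubernetes image set (here, the busybox image deployed as the test's user workload). A sketch of that kind of scan; the allow-list of prefixes is illustrative, not the test's own:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "default-k8s-diff-port-155028", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Example prefixes for images minikube deploys itself (illustrative).
	allowed := []string{"registry.k8s.io/", "docker.io/kubernetesui/"}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		known := false
		for _, p := range allowed {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}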

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-155028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028: exit status 2 (338.63828ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028: exit status 2 (315.940814ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-155028 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-155028 -n default-k8s-diff-port-155028
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

Test skip (26/347)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.61s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-403886 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-403886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-403886
--- SKIP: TestDownloadOnlyKic (0.61s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82:
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.11s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-389345 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-389345

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-389345

>>> host: /etc/nsswitch.conf:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/hosts:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/resolv.conf:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-389345

>>> host: crictl pods:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: crictl containers:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> k8s: describe netcat deployment:
error: context "cilium-389345" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-389345" does not exist

>>> k8s: netcat logs:
error: context "cilium-389345" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-389345" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-389345" does not exist

>>> k8s: coredns logs:
error: context "cilium-389345" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-389345" does not exist

>>> k8s: api server logs:
error: context "cilium-389345" does not exist

>>> host: /etc/cni:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: ip a s:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: ip r s:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: iptables-save:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: iptables table nat:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-389345

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-389345

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-389345" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-389345" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-389345

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-389345

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-389345" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-389345" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-389345" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-389345" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-389345" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: kubelet daemon config:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> k8s: kubelet logs:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-389345

>>> host: docker daemon status:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: docker daemon config:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: docker system info:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: cri-docker daemon status:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: cri-docker daemon config:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: cri-dockerd version:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: containerd daemon status:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: containerd daemon config:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: containerd config dump:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: crio daemon status:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: crio daemon config:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: /etc/crio:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

>>> host: crio config:
* Profile "cilium-389345" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-389345"

----------------------- debugLogs end: cilium-389345 [took: 4.879986158s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-389345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-389345
--- SKIP: TestNetworkPlugins/group/cilium (5.11s)
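The debugLogs dump above is a fixed battery of diagnostics (kubectl lookups against the context plus minikube host inspection) that runs even when the plugin test is skipped, which is why every probe fails with "context was not found" or "Profile ... not found". The loop below gives the flavor of that collection; the probe table is a small, guessed subset, not net_test.go's actual list.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "cilium-389345"
	probes := []struct {
		header string
		argv   []string
	}{
		{"k8s: describe kube-proxy daemon set", []string{"kubectl", "--context", ctx, "describe", "ds", "kube-proxy", "-n", "kube-system"}},
		{"k8s: kube-proxy logs", []string{"kubectl", "--context", ctx, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
		{"host: ip a s", []string{"minikube", "-p", ctx, "ssh", "ip a s"}},
	}
	for _, p := range probes {
		// Print the header, then whatever the command produced (or its error).
		out, err := exec.Command(p.argv[0], p.argv[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s", p.header, out)
		if err != nil {
			fmt.Println(err)
		}
	}
}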

TestStartStop/group/disable-driver-mounts (0.27s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-548212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-548212
--- SKIP: TestStartStop/group/disable-driver-mounts (0.27s)
