Test Report: Docker_Linux_crio_arm64 21504

3892f90e7d746f1b37c491f3707229f264f0f5da:2025-09-08:41335

Failed tests (11/331)

TestAddons/parallel/Ingress (493.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-242829 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-242829 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-242829 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9e4cf025-f78d-404a-99ec-0372996d1d4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-242829 -n addons-242829
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-09-08 13:53:42.27902482 +0000 UTC m=+820.554276713
addons_test.go:252: (dbg) Run:  kubectl --context addons-242829 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-242829 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-242829/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:45:41 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kp6lf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-kp6lf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/nginx to addons-242829
  Warning  Failed     5m31s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    94s (x5 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     64s (x4 over 7m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     64s (x5 over 7m2s)   kubelet            Error: ErrImagePull
  Normal   BackOff    12s (x14 over 7m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     12s (x14 over 7m1s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-242829 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-242829 logs nginx -n default: exit status 1 (104.881183ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-242829 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
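
The failure above is a registry-quota problem, not an ingress bug: every attempt to pull docker.io/nginx:alpine was rejected with "toomanyrequests", i.e. Docker Hub's unauthenticated pull rate limit for the CI host's IP. A minimal sketch for checking the remaining anonymous quota from that host, assuming curl and jq are available (ratelimitpreview/test is Docker's documented probe image, and a HEAD request against it does not count toward the limit):

    # Fetch an anonymous pull token for the probe repository
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # HEAD the manifest and read the quota headers
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

The ratelimit-limit and ratelimit-remaining response headers report the quota for this IP; a remaining value of 0 is consistent with the kubelet events above.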
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-242829
helpers_test.go:243: (dbg) docker inspect addons-242829:

-- stdout --
	[
	    {
	        "Id": "3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21",
	        "Created": "2025-09-08T13:40:42.8963652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708440,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:40:42.965703856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/hosts",
	        "LogPath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21-json.log",
	        "Name": "/addons-242829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-242829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-242829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21",
	                "LowerDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a-init/diff:/var/lib/docker/overlay2/e952efc7b05509e28dc878e8f391bd641c7ca24b818c7ca2dd0e02dcca8c51fa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-242829",
	                "Source": "/var/lib/docker/volumes/addons-242829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-242829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-242829",
	                "name.minikube.sigs.k8s.io": "addons-242829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bf24bd76272dae654cf1fd6dab3567c143204d67c70e508f804bdc80ddb0517",
	            "SandboxKey": "/var/run/docker/netns/9bf24bd76272",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-242829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:29:e5:cf:87:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d881d6faaefe4fae832a05ea66173e3cd8809d0e3fbd480724f018538b9c522",
	                    "EndpointID": "b31124decc38e71bd3c27193d8195834d2994d82cbeecde2a81225a417dd9f52",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-242829",
	                        "3fa9656c3a03"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
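
The port mappings buried in that inspect output are easier to read with a jq filter than by scanning the raw JSON (a convenience sketch, assuming jq is installed on the host):

    # Print only the published-port table from the inspect array
    docker inspect addons-242829 | jq '.[0].NetworkSettings.Ports'

For this container it yields the loopback-bound ephemeral mappings recorded above: 22->33528, 2376->33529, 5000->33530, 8443->33531, 32443->33532.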
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-242829 -n addons-242829
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 logs -n 25: (1.478322565s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-445998                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-445998   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ start   │ --download-only -p download-docker-833805 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-833805 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ delete  │ -p download-docker-833805                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-833805 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ start   │ --download-only -p binary-mirror-025043 --alsologtostderr --binary-mirror http://127.0.0.1:39871 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-025043   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ delete  │ -p binary-mirror-025043                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-025043   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ addons  │ enable dashboard -p addons-242829                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ addons  │ disable dashboard -p addons-242829                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ start   │ -p addons-242829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:43 UTC │
	│ addons  │ addons-242829 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:43 UTC │ 08 Sep 25 13:43 UTC │
	│ addons  │ addons-242829 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:43 UTC │ 08 Sep 25 13:43 UTC │
	│ addons  │ enable headlamp -p addons-242829 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:43 UTC │ 08 Sep 25 13:43 UTC │
	│ ip      │ addons-242829 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ ssh     │ addons-242829 ssh cat /opt/local-path-provisioner/pvc-2fff7e83-b238-4284-875d-267426c27321_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:45 UTC │
	│ addons  │ addons-242829 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:45 UTC │ 08 Sep 25 13:45 UTC │
	│ addons  │ addons-242829 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:51 UTC │ 08 Sep 25 13:51 UTC │
	│ addons  │ addons-242829 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:51 UTC │ 08 Sep 25 13:51 UTC │
	│ addons  │ addons-242829 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:51 UTC │ 08 Sep 25 13:51 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-242829                                                                                                                                                                                                                                                                                                                                                                                           │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:51 UTC │ 08 Sep 25 13:51 UTC │
	│ addons  │ addons-242829 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:51 UTC │ 08 Sep 25 13:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:40:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:40:17.611835  708040 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:40:17.612055  708040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:17.612105  708040 out.go:374] Setting ErrFile to fd 2...
	I0908 13:40:17.612130  708040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:17.612420  708040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 13:40:17.612888  708040 out.go:368] Setting JSON to false
	I0908 13:40:17.613741  708040 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12168,"bootTime":1757326650,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 13:40:17.613840  708040 start.go:140] virtualization:  
	I0908 13:40:17.617183  708040 out.go:179] * [addons-242829] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:40:17.620176  708040 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:40:17.620263  708040 notify.go:220] Checking for updates...
	I0908 13:40:17.625846  708040 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:40:17.628768  708040 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:40:17.631614  708040 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 13:40:17.634403  708040 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:40:17.637281  708040 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:40:17.640400  708040 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:40:17.662527  708040 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:40:17.662638  708040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:17.726885  708040 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 13:40:17.717676728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:17.726984  708040 docker.go:318] overlay module found
	I0908 13:40:17.730066  708040 out.go:179] * Using the docker driver based on user configuration
	I0908 13:40:17.732841  708040 start.go:304] selected driver: docker
	I0908 13:40:17.732863  708040 start.go:918] validating driver "docker" against <nil>
	I0908 13:40:17.732877  708040 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:40:17.733612  708040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:17.796743  708040 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 13:40:17.787818972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:17.796912  708040 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:40:17.797152  708040 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:40:17.800153  708040 out.go:179] * Using Docker driver with root privileges
	I0908 13:40:17.802944  708040 cni.go:84] Creating CNI manager for ""
	I0908 13:40:17.803023  708040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:40:17.803033  708040 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:40:17.803111  708040 start.go:348] cluster config:
	{Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:40:17.808034  708040 out.go:179] * Starting "addons-242829" primary control-plane node in "addons-242829" cluster
	I0908 13:40:17.810905  708040 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:40:17.813802  708040 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:40:17.816589  708040 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:40:17.816660  708040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:40:17.816691  708040 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 13:40:17.816703  708040 cache.go:58] Caching tarball of preloaded images
	I0908 13:40:17.816781  708040 preload.go:172] Found /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 13:40:17.816794  708040 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:40:17.817138  708040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/config.json ...
	I0908 13:40:17.817165  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/config.json: {Name:mk56726e796e1bf536ba1733dd837574e12d14da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:17.832827  708040 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:40:17.832978  708040 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:40:17.833003  708040 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 13:40:17.833013  708040 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 13:40:17.833021  708040 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 13:40:17.833032  708040 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 13:40:35.618104  708040 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 13:40:35.618146  708040 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:40:35.618186  708040 start.go:360] acquireMachinesLock for addons-242829: {Name:mk34f8a98d9e1dfddf6985bfb6f0c7f44da9593d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:40:35.619048  708040 start.go:364] duration metric: took 830.478µs to acquireMachinesLock for "addons-242829"
	I0908 13:40:35.619098  708040 start.go:93] Provisioning new machine with config: &{Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:40:35.619176  708040 start.go:125] createHost starting for "" (driver="docker")
	I0908 13:40:35.622506  708040 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 13:40:35.622755  708040 start.go:159] libmachine.API.Create for "addons-242829" (driver="docker")
	I0908 13:40:35.622800  708040 client.go:168] LocalClient.Create starting
	I0908 13:40:35.622921  708040 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem
	I0908 13:40:36.058437  708040 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem
	I0908 13:40:36.334678  708040 cli_runner.go:164] Run: docker network inspect addons-242829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 13:40:36.351083  708040 cli_runner.go:211] docker network inspect addons-242829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 13:40:36.351181  708040 network_create.go:284] running [docker network inspect addons-242829] to gather additional debugging logs...
	I0908 13:40:36.351204  708040 cli_runner.go:164] Run: docker network inspect addons-242829
	W0908 13:40:36.368601  708040 cli_runner.go:211] docker network inspect addons-242829 returned with exit code 1
	I0908 13:40:36.368636  708040 network_create.go:287] error running [docker network inspect addons-242829]: docker network inspect addons-242829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-242829 not found
	I0908 13:40:36.368650  708040 network_create.go:289] output of [docker network inspect addons-242829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-242829 not found
	
	** /stderr **
	I0908 13:40:36.368793  708040 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:40:36.385516  708040 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400194de40}
	I0908 13:40:36.385567  708040 network_create.go:124] attempt to create docker network addons-242829 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 13:40:36.385629  708040 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-242829 addons-242829
	I0908 13:40:36.445674  708040 network_create.go:108] docker network addons-242829 192.168.49.0/24 created
	I0908 13:40:36.445712  708040 kic.go:121] calculated static IP "192.168.49.2" for the "addons-242829" container
	I0908 13:40:36.445807  708040 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 13:40:36.464418  708040 cli_runner.go:164] Run: docker volume create addons-242829 --label name.minikube.sigs.k8s.io=addons-242829 --label created_by.minikube.sigs.k8s.io=true
	I0908 13:40:36.482924  708040 oci.go:103] Successfully created a docker volume addons-242829
	I0908 13:40:36.483016  708040 cli_runner.go:164] Run: docker run --rm --name addons-242829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-242829 --entrypoint /usr/bin/test -v addons-242829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 13:40:38.605068  708040 cli_runner.go:217] Completed: docker run --rm --name addons-242829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-242829 --entrypoint /usr/bin/test -v addons-242829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (2.121993809s)
	I0908 13:40:38.605099  708040 oci.go:107] Successfully prepared a docker volume addons-242829
	I0908 13:40:38.605128  708040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:40:38.605147  708040 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 13:40:38.605212  708040 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-242829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 13:40:42.828113  708040 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-242829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.222833461s)
	I0908 13:40:42.828144  708040 kic.go:203] duration metric: took 4.222992823s to extract preloaded images to volume ...
	W0908 13:40:42.828292  708040 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 13:40:42.828396  708040 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 13:40:42.881763  708040 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-242829 --name addons-242829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-242829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-242829 --network addons-242829 --ip 192.168.49.2 --volume addons-242829:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 13:40:43.195827  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Running}}
	I0908 13:40:43.217647  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:40:43.244837  708040 cli_runner.go:164] Run: docker exec addons-242829 stat /var/lib/dpkg/alternatives/iptables
	I0908 13:40:43.299790  708040 oci.go:144] the created container "addons-242829" has a running status.
	I0908 13:40:43.299817  708040 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa...
	I0908 13:40:43.632861  708040 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 13:40:43.662326  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:40:43.684683  708040 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 13:40:43.684702  708040 kic_runner.go:114] Args: [docker exec --privileged addons-242829 chown docker:docker /home/docker/.ssh/authorized_keys]
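
	Because SSH is not usable yet at this point, kic_runner pushes the freshly generated public key into the container over docker exec. Roughly the same steps by hand, a sketch that assumes /home/docker/.ssh already exists in the kicbase image:

	    ssh-keygen -t rsa -N '' -f ./id_rsa                       # mirrors the id_rsa created above
	    docker cp ./id_rsa.pub addons-242829:/home/docker/.ssh/authorized_keys
	    docker exec --privileged addons-242829 chown docker:docker /home/docker/.ssh/authorized_keys
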
	I0908 13:40:43.747014  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:40:43.775976  708040 machine.go:93] provisionDockerMachine start ...
	I0908 13:40:43.776094  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:43.803855  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:43.804254  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:43.804269  708040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:40:43.976484  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-242829
	
	I0908 13:40:43.976549  708040 ubuntu.go:182] provisioning hostname "addons-242829"
	I0908 13:40:43.976634  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.002030  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:44.002363  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:44.002376  708040 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-242829 && echo "addons-242829" | sudo tee /etc/hostname
	I0908 13:40:44.146344  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-242829
	
	I0908 13:40:44.146493  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.164921  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:44.165316  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:44.165341  708040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-242829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-242829/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-242829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:40:44.298058  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
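
	Every SSH call in this phase goes through the ephemeral host port Docker mapped to the container's port 22 (33528 in this run). The same lookup plus a manual round-trip, a sketch using the key path from the log:

	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-242829
	    ssh -o StrictHostKeyChecking=no -p 33528 \
	        -i /home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa \
	        docker@127.0.0.1 hostname                             # should print addons-242829
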
	I0908 13:40:44.298126  708040 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-705422/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-705422/.minikube}
	I0908 13:40:44.298163  708040 ubuntu.go:190] setting up certificates
	I0908 13:40:44.298208  708040 provision.go:84] configureAuth start
	I0908 13:40:44.298291  708040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-242829
	I0908 13:40:44.316028  708040 provision.go:143] copyHostCerts
	I0908 13:40:44.316201  708040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem (1679 bytes)
	I0908 13:40:44.316352  708040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem (1078 bytes)
	I0908 13:40:44.316424  708040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem (1123 bytes)
	I0908 13:40:44.316470  708040 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem org=jenkins.addons-242829 san=[127.0.0.1 192.168.49.2 addons-242829 localhost minikube]
	I0908 13:40:44.470229  708040 provision.go:177] copyRemoteCerts
	I0908 13:40:44.470296  708040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:40:44.470346  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.491126  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:44.581210  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:40:44.604748  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 13:40:44.628212  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 13:40:44.652557  708040 provision.go:87] duration metric: took 354.306456ms to configureAuth
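
	configureAuth generated a server certificate whose SANs (the san=[...] list above) cover every name this endpoint may be dialed by, then copied it to /etc/docker inside the node. A sketch for reading the SANs back, assuming openssl is available in the node image:

	    docker exec addons-242829 openssl x509 -in /etc/docker/server.pem -noout -text \
	        | grep -A1 'Subject Alternative Name'
	    # expected to list: addons-242829, localhost, minikube, 127.0.0.1, 192.168.49.2
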
	I0908 13:40:44.652586  708040 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:40:44.652763  708040 config.go:182] Loaded profile config "addons-242829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:40:44.652870  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.669631  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:44.670033  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:44.670057  708040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 13:40:44.903008  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 13:40:44.903031  708040 machine.go:96] duration metric: took 1.127035132s to provisionDockerMachine
	I0908 13:40:44.903041  708040 client.go:171] duration metric: took 9.280229822s to LocalClient.Create
	I0908 13:40:44.903063  708040 start.go:167] duration metric: took 9.280309568s to libmachine.API.Create "addons-242829"
	I0908 13:40:44.903073  708040 start.go:293] postStartSetup for "addons-242829" (driver="docker")
	I0908 13:40:44.903087  708040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:40:44.903152  708040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:40:44.903197  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.921269  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.035812  708040 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:40:45.056825  708040 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:40:45.056861  708040 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:40:45.056872  708040 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:40:45.056880  708040 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:40:45.056900  708040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/addons for local assets ...
	I0908 13:40:45.056999  708040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/files for local assets ...
	I0908 13:40:45.057025  708040 start.go:296] duration metric: took 153.94671ms for postStartSetup
	I0908 13:40:45.057469  708040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-242829
	I0908 13:40:45.085826  708040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/config.json ...
	I0908 13:40:45.086193  708040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:40:45.086261  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:45.109288  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.210165  708040 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:40:45.215815  708040 start.go:128] duration metric: took 9.596620623s to createHost
	I0908 13:40:45.215852  708040 start.go:83] releasing machines lock for "addons-242829", held for 9.596774807s
	I0908 13:40:45.215938  708040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-242829
	I0908 13:40:45.236155  708040 ssh_runner.go:195] Run: cat /version.json
	I0908 13:40:45.236216  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:45.236537  708040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:40:45.236628  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:45.263280  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.276048  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.491979  708040 ssh_runner.go:195] Run: systemctl --version
	I0908 13:40:45.496235  708040 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 13:40:45.638651  708040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:40:45.642903  708040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:40:45.665039  708040 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:40:45.665113  708040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:40:45.700243  708040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
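
	Rather than deleting the image's stock CNI configs, minikube parks them under a .mk_disabled suffix so that only the kindnet config written later is live. The effect is directly visible:

	    docker exec addons-242829 ls /etc/cni/net.d/
	    # e.g. 100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled
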
	I0908 13:40:45.700272  708040 start.go:495] detecting cgroup driver to use...
	I0908 13:40:45.700334  708040 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:40:45.700400  708040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 13:40:45.716691  708040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 13:40:45.728648  708040 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:40:45.728755  708040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:40:45.743811  708040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:40:45.758626  708040 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:40:45.849328  708040 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:40:45.955368  708040 docker.go:234] disabling docker service ...
	I0908 13:40:45.955523  708040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:40:45.976324  708040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:40:45.988475  708040 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:40:46.080099  708040 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:40:46.185763  708040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:40:46.198562  708040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:40:46.214900  708040 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 13:40:46.215024  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.225669  708040 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 13:40:46.225742  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.235946  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.246450  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.257079  708040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:40:46.266506  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.276829  708040 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.293173  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.302879  708040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:40:46.311300  708040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:40:46.319915  708040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:40:46.409083  708040 ssh_runner.go:195] Run: sudo systemctl restart crio
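
	The sed edits above boil down to four CRI-O settings: the pause image, cgroupfs as cgroup manager, conmon in the pod cgroup, and unprivileged low ports. After the restart they can be read back from the drop-in (a sketch; the field names follow the sed patterns in the log):

	    docker exec addons-242829 grep -E \
	        'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    docker exec addons-242829 systemctl is-active crio        # should print: active
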
	I0908 13:40:46.517698  708040 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 13:40:46.517807  708040 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 13:40:46.521577  708040 start.go:563] Will wait 60s for crictl version
	I0908 13:40:46.521644  708040 ssh_runner.go:195] Run: which crictl
	I0908 13:40:46.525077  708040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:40:46.560834  708040 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 13:40:46.560977  708040 ssh_runner.go:195] Run: crio --version
	I0908 13:40:46.599539  708040 ssh_runner.go:195] Run: crio --version
	I0908 13:40:46.644113  708040 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 13:40:46.646918  708040 cli_runner.go:164] Run: docker network inspect addons-242829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:40:46.665938  708040 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 13:40:46.669670  708040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:40:46.680418  708040 kubeadm.go:875] updating cluster {Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:40:46.680542  708040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:40:46.680604  708040 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:40:46.766745  708040 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:40:46.766774  708040 crio.go:433] Images already preloaded, skipping extraction
	I0908 13:40:46.766834  708040 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:40:46.813615  708040 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:40:46.813652  708040 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:40:46.813662  708040 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 13:40:46.813793  708040 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-242829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:40:46.813880  708040 ssh_runner.go:195] Run: crio config
	I0908 13:40:46.867613  708040 cni.go:84] Creating CNI manager for ""
	I0908 13:40:46.867639  708040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:40:46.867649  708040 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:40:46.867673  708040 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-242829 NodeName:addons-242829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:40:46.867814  708040 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-242829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 13:40:46.867888  708040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:40:46.877181  708040 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:40:46.877289  708040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:40:46.886194  708040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 13:40:46.904960  708040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:40:46.922667  708040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
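
	The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2210 bytes here) and is promoted to kubeadm.yaml just before init. A hedged sketch for sanity-checking it ahead of time with kubeadm's dry-run mode:

	    docker exec addons-242829 /var/lib/minikube/binaries/v1.34.0/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml.new \
	        --ignore-preflight-errors=all --dry-run               # validate without touching the node
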
	I0908 13:40:46.940731  708040 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:40:46.944114  708040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:40:46.954925  708040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:40:47.042616  708040 ssh_runner.go:195] Run: sudo systemctl start kubelet
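
	The two unit files scp'd above are a stock kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart printed earlier. Confirming systemd picked them up (kubelet may flap until kubeadm writes /var/lib/kubelet/config.yaml):

	    docker exec addons-242829 systemctl cat kubelet | head -n 20
	    docker exec addons-242829 systemctl is-active kubelet
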
	I0908 13:40:47.056142  708040 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829 for IP: 192.168.49.2
	I0908 13:40:47.056200  708040 certs.go:194] generating shared ca certs ...
	I0908 13:40:47.056231  708040 certs.go:226] acquiring lock for ca certs: {Name:mka1c81276fce9cfa47987803f0bf5dea623834c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:47.056374  708040 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key
	I0908 13:40:47.437346  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt ...
	I0908 13:40:47.437378  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt: {Name:mkd004c7069a772f6ed057c1d4b6ff0d25256d90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:47.438172  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key ...
	I0908 13:40:47.438194  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key: {Name:mkf893a7dc40078ec8ea08a6cd5404c07e3a3fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:47.438865  708040 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key
	I0908 13:40:48.138423  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt ...
	I0908 13:40:48.138455  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt: {Name:mkc0be6db70b1f098764e40b029d9241a4fe9ee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:48.139220  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key ...
	I0908 13:40:48.139236  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key: {Name:mk88915ba4aeda971a22e98cbaf73153c48ab75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:48.139980  708040 certs.go:256] generating profile certs ...
	I0908 13:40:48.140051  708040 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.key
	I0908 13:40:48.140071  708040 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt with IP's: []
	I0908 13:40:49.291632  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt ...
	I0908 13:40:49.291662  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: {Name:mk412c20027f8bcddad3314be6ec3dd40e89c15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.291884  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.key ...
	I0908 13:40:49.291899  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.key: {Name:mkb7f447e49003768e6456164072912d0911889a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.292569  708040 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d
	I0908 13:40:49.292596  708040 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 13:40:49.539881  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d ...
	I0908 13:40:49.539915  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d: {Name:mk889859f9a587cbb40c3dba76b152e4bd4d2d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.540764  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d ...
	I0908 13:40:49.540786  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d: {Name:mk861c6128d0c10c3a1580bffb4cc49964d2fd8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.541493  708040 certs.go:381] copying /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d -> /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt
	I0908 13:40:49.541577  708040 certs.go:385] copying /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d -> /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key
	I0908 13:40:49.541632  708040 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key
	I0908 13:40:49.541653  708040 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt with IP's: []
	I0908 13:40:49.672690  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt ...
	I0908 13:40:49.672724  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt: {Name:mk50703bc10335ebada97d3990082b193f0643fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.673517  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key ...
	I0908 13:40:49.673535  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key: {Name:mk1e1b1f5fcc42e012687c527792308fe1e7f464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.673745  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 13:40:49.673786  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem (1078 bytes)
	I0908 13:40:49.673815  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:40:49.673846  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem (1679 bytes)
	I0908 13:40:49.674495  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:40:49.698878  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 13:40:49.722454  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:40:49.746171  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 13:40:49.770523  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 13:40:49.795380  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 13:40:49.820549  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:40:49.844517  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 13:40:49.869862  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:40:49.894961  708040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:40:49.913574  708040 ssh_runner.go:195] Run: openssl version
	I0908 13:40:49.919267  708040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:40:49.928598  708040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:40:49.931987  708040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:40 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:40:49.932052  708040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:40:49.939070  708040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
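
	The link name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, which is exactly what the openssl x509 -hash run above computes. Reproducing it inside the node:

	    docker exec addons-242829 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    docker exec addons-242829 ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem
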
	I0908 13:40:49.948278  708040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:40:49.951690  708040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 13:40:49.951739  708040 kubeadm.go:392] StartCluster: {Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:40:49.951866  708040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 13:40:49.951932  708040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:40:49.990014  708040 cri.go:89] found id: ""
	I0908 13:40:49.990136  708040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:40:49.999181  708040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 13:40:50.009900  708040 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 13:40:50.010031  708040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 13:40:50.021425  708040 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 13:40:50.021447  708040 kubeadm.go:157] found existing configuration files:
	
	I0908 13:40:50.021531  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 13:40:50.039224  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 13:40:50.039375  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 13:40:50.048991  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 13:40:50.058465  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 13:40:50.058619  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 13:40:50.067475  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 13:40:50.077583  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 13:40:50.077708  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 13:40:50.087652  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 13:40:50.097934  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 13:40:50.098062  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 13:40:50.107435  708040 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 13:40:50.150391  708040 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 13:40:50.150711  708040 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 13:40:50.170520  708040 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 13:40:50.170605  708040 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 13:40:50.170655  708040 kubeadm.go:310] OS: Linux
	I0908 13:40:50.170707  708040 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 13:40:50.170761  708040 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 13:40:50.170813  708040 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 13:40:50.170867  708040 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 13:40:50.170920  708040 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 13:40:50.170993  708040 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 13:40:50.171043  708040 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 13:40:50.171097  708040 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 13:40:50.171147  708040 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 13:40:50.233188  708040 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 13:40:50.233324  708040 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 13:40:50.233417  708040 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 13:40:50.240303  708040 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 13:40:50.246601  708040 out.go:252]   - Generating certificates and keys ...
	I0908 13:40:50.246749  708040 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 13:40:50.246850  708040 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 13:40:50.407408  708040 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 13:40:50.819317  708040 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 13:40:51.240389  708040 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 13:40:52.132979  708040 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 13:40:52.538212  708040 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 13:40:52.538606  708040 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-242829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 13:40:52.796925  708040 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 13:40:52.797158  708040 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-242829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 13:40:53.651380  708040 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 13:40:54.482009  708040 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 13:40:55.183766  708040 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 13:40:55.184016  708040 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 13:40:56.024563  708040 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 13:40:56.232938  708040 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 13:40:57.004279  708040 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 13:40:57.261656  708040 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 13:40:58.209207  708040 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 13:40:58.209786  708040 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 13:40:58.212396  708040 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 13:40:58.215879  708040 out.go:252]   - Booting up control plane ...
	I0908 13:40:58.216003  708040 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 13:40:58.216108  708040 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 13:40:58.216185  708040 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 13:40:58.232524  708040 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 13:40:58.232636  708040 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 13:40:58.239256  708040 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 13:40:58.239575  708040 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 13:40:58.239802  708040 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 13:40:58.332462  708040 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 13:40:58.332589  708040 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 13:41:00.346376  708040 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.014225444s
	I0908 13:41:00.350985  708040 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 13:41:00.351332  708040 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 13:41:00.351611  708040 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 13:41:00.351698  708040 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 13:41:03.625465  708040 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.273828958s
	I0908 13:41:05.735265  708040 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.384285287s
	I0908 13:41:06.853061  708040 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.502058978s
	I0908 13:41:06.882674  708040 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 13:41:06.902706  708040 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 13:41:06.919276  708040 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 13:41:06.919499  708040 kubeadm.go:310] [mark-control-plane] Marking the node addons-242829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 13:41:06.936685  708040 kubeadm.go:310] [bootstrap-token] Using token: o1zj4i.q3ail6j6yitj6kgp
	I0908 13:41:06.941644  708040 out.go:252]   - Configuring RBAC rules ...
	I0908 13:41:06.941781  708040 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 13:41:06.950215  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 13:41:06.962471  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 13:41:06.966692  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 13:41:06.970792  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 13:41:06.976907  708040 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 13:41:07.261240  708040 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 13:41:07.721919  708040 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 13:41:08.259405  708040 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 13:41:08.262883  708040 kubeadm.go:310] 
	I0908 13:41:08.262965  708040 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 13:41:08.262975  708040 kubeadm.go:310] 
	I0908 13:41:08.263055  708040 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 13:41:08.263064  708040 kubeadm.go:310] 
	I0908 13:41:08.263109  708040 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 13:41:08.263173  708040 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 13:41:08.263229  708040 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 13:41:08.263237  708040 kubeadm.go:310] 
	I0908 13:41:08.263296  708040 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 13:41:08.263308  708040 kubeadm.go:310] 
	I0908 13:41:08.263357  708040 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 13:41:08.263365  708040 kubeadm.go:310] 
	I0908 13:41:08.263417  708040 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 13:41:08.263499  708040 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 13:41:08.263573  708040 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 13:41:08.263581  708040 kubeadm.go:310] 
	I0908 13:41:08.263667  708040 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 13:41:08.263749  708040 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 13:41:08.263758  708040 kubeadm.go:310] 
	I0908 13:41:08.263843  708040 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o1zj4i.q3ail6j6yitj6kgp \
	I0908 13:41:08.263950  708040 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29a6290ed8ceaba0b394f06d6a346de6983f25cd3a9793386398697a5378db4a \
	I0908 13:41:08.263974  708040 kubeadm.go:310] 	--control-plane 
	I0908 13:41:08.263982  708040 kubeadm.go:310] 
	I0908 13:41:08.264067  708040 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 13:41:08.264091  708040 kubeadm.go:310] 
	I0908 13:41:08.264176  708040 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o1zj4i.q3ail6j6yitj6kgp \
	I0908 13:41:08.264283  708040 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29a6290ed8ceaba0b394f06d6a346de6983f25cd3a9793386398697a5378db4a 
	I0908 13:41:08.265834  708040 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 13:41:08.266091  708040 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 13:41:08.266214  708040 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 13:41:08.266255  708040 cni.go:84] Creating CNI manager for ""
	I0908 13:41:08.266269  708040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:41:08.269410  708040 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 13:41:08.272312  708040 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 13:41:08.276100  708040 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 13:41:08.276119  708040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 13:41:08.294511  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
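
	The applied cni.yaml is the kindnet manifest recommended above for the docker driver + crio runtime pairing. A sketch for watching it come up; the DaemonSet name kindnet is an assumption taken from the kindnet manifest, not from this log:

	    kubectl --context addons-242829 -n kube-system get daemonset kindnet
	    docker exec addons-242829 ls /etc/cni/net.d/              # kindnet's conflist should now be the live config
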
	I0908 13:41:08.578859  708040 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 13:41:08.578990  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:08.579087  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-242829 minikube.k8s.io/updated_at=2025_09_08T13_41_08_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6 minikube.k8s.io/name=addons-242829 minikube.k8s.io/primary=true
	I0908 13:41:08.753662  708040 ops.go:34] apiserver oom_adj: -16
	I0908 13:41:08.753791  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:09.254412  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:09.754009  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:10.253993  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:10.754645  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:11.254654  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:11.754334  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:12.254184  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:12.753949  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:12.855806  708040 kubeadm.go:1105] duration metric: took 4.276857237s to wait for elevateKubeSystemPrivileges
	I0908 13:41:12.855839  708040 kubeadm.go:394] duration metric: took 22.904104396s to StartCluster
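The burst of identical "kubectl get sa default" runs above is a poll: minikube re-checks roughly every 500ms until the default service account exists, then records the elapsed time as the elevateKubeSystemPrivileges metric. A minimal standalone sketch of the same pattern, using the binary and kubeconfig paths from the log and an assumed 2-minute deadline:

```go
// Poll until the default service account exists, mirroring the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.0/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for the default service account")
}
```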
	I0908 13:41:12.855856  708040 settings.go:142] acquiring lock: {Name:mk8d04bf7fe8294bcf8a246a4f876d89e8c791a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:12.856035  708040 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:41:12.856443  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/kubeconfig: {Name:mke605cd08a8cfc644c26dac74a49ac24bb6558b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:12.856654  708040 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:41:12.856756  708040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 13:41:12.857011  708040 config.go:182] Loaded profile config "addons-242829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:41:12.857046  708040 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 13:41:12.857132  708040 addons.go:69] Setting yakd=true in profile "addons-242829"
	I0908 13:41:12.857150  708040 addons.go:238] Setting addon yakd=true in "addons-242829"
	I0908 13:41:12.857177  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.857662  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.857961  708040 addons.go:69] Setting inspektor-gadget=true in profile "addons-242829"
	I0908 13:41:12.857986  708040 addons.go:238] Setting addon inspektor-gadget=true in "addons-242829"
	I0908 13:41:12.858018  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.858448  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.858816  708040 addons.go:69] Setting metrics-server=true in profile "addons-242829"
	I0908 13:41:12.858843  708040 addons.go:238] Setting addon metrics-server=true in "addons-242829"
	I0908 13:41:12.858866  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.859315  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.859811  708040 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-242829"
	I0908 13:41:12.859836  708040 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-242829"
	I0908 13:41:12.859859  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.860351  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.863276  708040 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-242829"
	I0908 13:41:12.865917  708040 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-242829"
	I0908 13:41:12.866013  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.866065  708040 addons.go:69] Setting registry=true in profile "addons-242829"
	I0908 13:41:12.866078  708040 addons.go:238] Setting addon registry=true in "addons-242829"
	I0908 13:41:12.866124  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.866619  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.868793  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.863452  708040 addons.go:69] Setting cloud-spanner=true in profile "addons-242829"
	I0908 13:41:12.883987  708040 addons.go:238] Setting addon cloud-spanner=true in "addons-242829"
	I0908 13:41:12.884126  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.884687  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.884937  708040 addons.go:69] Setting registry-creds=true in profile "addons-242829"
	I0908 13:41:12.884955  708040 addons.go:238] Setting addon registry-creds=true in "addons-242829"
	I0908 13:41:12.884979  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.885465  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.863461  708040 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-242829"
	I0908 13:41:12.897592  708040 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-242829"
	I0908 13:41:12.897628  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.898094  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.907587  708040 addons.go:69] Setting storage-provisioner=true in profile "addons-242829"
	I0908 13:41:12.863466  708040 addons.go:69] Setting default-storageclass=true in profile "addons-242829"
	I0908 13:41:12.863470  708040 addons.go:69] Setting gcp-auth=true in profile "addons-242829"
	I0908 13:41:12.863473  708040 addons.go:69] Setting ingress=true in profile "addons-242829"
	I0908 13:41:12.863478  708040 addons.go:69] Setting ingress-dns=true in profile "addons-242829"
	I0908 13:41:12.865847  708040 out.go:179] * Verifying Kubernetes components...
	I0908 13:41:12.917116  708040 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-242829"
	I0908 13:41:12.917134  708040 addons.go:69] Setting volcano=true in profile "addons-242829"
	I0908 13:41:12.917323  708040 addons.go:69] Setting volumesnapshots=true in profile "addons-242829"
	I0908 13:41:12.944632  708040 addons.go:238] Setting addon volumesnapshots=true in "addons-242829"
	I0908 13:41:12.944711  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.945359  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.977098  708040 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-242829"
	I0908 13:41:12.977245  708040 addons.go:238] Setting addon storage-provisioner=true in "addons-242829"
	I0908 13:41:12.979380  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.980053  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.985029  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.032887  708040 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 13:41:12.977660  708040 mustload.go:65] Loading cluster: addons-242829
	I0908 13:41:13.033226  708040 config.go:182] Loaded profile config "addons-242829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:41:13.033527  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.035873  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:41:13.035903  708040 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:41:13.035976  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:12.977675  708040 addons.go:238] Setting addon ingress=true in "addons-242829"
	I0908 13:41:13.054527  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.055239  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.977683  708040 addons.go:238] Setting addon ingress-dns=true in "addons-242829"
	I0908 13:41:13.078285  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.081716  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 13:41:13.085170  708040 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 13:41:13.085189  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 13:41:13.089217  708040 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 13:41:13.089319  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.092352  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.112670  708040 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 13:41:13.112910  708040 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 13:41:13.081764  708040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:12.977731  708040 addons.go:238] Setting addon volcano=true in "addons-242829"
	I0908 13:41:13.113541  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.114209  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.125237  708040 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 13:41:13.125263  708040 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 13:41:13.125328  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:12.977722  708040 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-242829"
	I0908 13:41:13.127262  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.127549  708040 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 13:41:13.127884  708040 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 13:41:13.128029  708040 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 13:41:13.128036  708040 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 13:41:13.152612  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 13:41:13.165716  708040 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 13:41:13.167638  708040 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 13:41:13.167720  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 13:41:13.167854  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.175061  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.176094  708040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0908 13:41:13.176360  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 13:41:13.176600  708040 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 13:41:13.176613  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 13:41:13.176683  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.139673  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 13:41:13.179477  708040 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 13:41:13.143828  708040 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:41:13.179538  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 13:41:13.179611  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.199977  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.145748  708040 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:41:13.216342  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 13:41:13.216412  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.145775  708040 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:41:13.241089  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 13:41:13.241167  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.254969  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 13:41:13.260236  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 13:41:13.263108  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 13:41:13.272429  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 13:41:13.274669  708040 addons.go:238] Setting addon default-storageclass=true in "addons-242829"
	I0908 13:41:13.274717  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.275132  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.309127  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 13:41:13.311005  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.320636  708040 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:41:13.343605  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 13:41:13.353259  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:41:13.358081  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:41:13.362017  708040 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:13.362090  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:41:13.362187  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.362268  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 13:41:13.362290  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:13.362364  708040 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 13:41:13.365483  708040 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:41:13.365510  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 13:41:13.365596  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.396020  708040 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 13:41:13.398928  708040 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:41:13.398952  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 13:41:13.399023  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.438514  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 13:41:13.438541  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 13:41:13.438607  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.440624  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.447963  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.448893  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.449592  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.458012  708040 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-242829"
	I0908 13:41:13.458052  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.458500  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.464271  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.464715  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.465036  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.520367  708040 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:13.520389  708040 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:41:13.520453  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.536823  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.539125  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.545027  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:13.549115  708040 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 13:41:13.549149  708040 retry.go:31] will retry after 231.314174ms: ssh: handshake failed: EOF
	I0908 13:41:13.557617  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.573134  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.590857  708040 out.go:179]   - Using image docker.io/busybox:stable
	I0908 13:41:13.593684  708040 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 13:41:13.597108  708040 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:41:13.597131  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 13:41:13.597200  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.633796  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:13.635002  708040 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 13:41:13.635068  708040 retry.go:31] will retry after 233.24099ms: ssh: handshake failed: EOF
	I0908 13:41:13.730239  708040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:13.774695  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:41:13.774759  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 13:41:13.870576  708040 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 13:41:13.870684  708040 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 13:41:13.876325  708040 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:13.876477  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 13:41:13.978367  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:41:13.978440  708040 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:41:14.016427  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 13:41:14.016504  708040 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 13:41:14.021587  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 13:41:14.032758  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:41:14.051013  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:14.055794  708040 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 13:41:14.055821  708040 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 13:41:14.094847  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 13:41:14.094874  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 13:41:14.096400  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:41:14.122207  708040 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 13:41:14.122232  708040 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 13:41:14.140543  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:14.146518  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:41:14.156909  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:14.156934  708040 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:41:14.170124  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:14.170940  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:41:14.173895  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 13:41:14.173919  708040 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 13:41:14.205635  708040 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 13:41:14.205661  708040 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 13:41:14.226116  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:41:14.238082  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 13:41:14.238108  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 13:41:14.290603  708040 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:41:14.290627  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 13:41:14.357318  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:14.408835  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:41:14.417633  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 13:41:14.417658  708040 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 13:41:14.419396  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 13:41:14.419414  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 13:41:14.423299  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 13:41:14.423324  708040 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 13:41:14.488219  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:41:14.571646  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:41:14.571671  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 13:41:14.630158  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 13:41:14.630184  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 13:41:14.686802  708040 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:41:14.686827  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 13:41:14.781823  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:41:14.842568  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 13:41:14.842595  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 13:41:14.885259  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:41:14.955702  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 13:41:14.955727  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 13:41:15.077649  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 13:41:15.077677  708040 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 13:41:15.136580  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 13:41:15.136604  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 13:41:15.287985  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 13:41:15.288010  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 13:41:15.491540  708040 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.315411894s)
	I0908 13:41:15.491568  708040 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
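The long sed pipeline completed above performs one text edit on the CoreDNS ConfigMap: a hosts{} stanza mapping host.minikube.internal to the gateway IP is inserted just before the "forward . /etc/resolv.conf" plugin (plus a log directive before "errors"). The sketch below reproduces only that string transformation; the sample Corefile is an assumed typical default, and pushing the result back with "kubectl replace" is elided.

```go
// Sketch of the Corefile edit behind the "host record injected" message.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block immediately before the forward plugin.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n" // assumed sample
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```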
	I0908 13:41:15.492533  708040 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.762271015s)
	I0908 13:41:15.493140  708040 node_ready.go:35] waiting up to 6m0s for node "addons-242829" to be "Ready" ...
	I0908 13:41:15.539157  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:41:15.539183  708040 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 13:41:15.709395  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:41:16.918514  708040 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-242829" context rescaled to 1 replicas
	I0908 13:41:17.320004  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.298377643s)
	W0908 13:41:17.562136  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:17.635648  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.602848064s)
	I0908 13:41:17.916164  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.865114352s)
	W0908 13:41:17.916200  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:17.916217  708040 retry.go:31] will retry after 176.760493ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
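The apply fails because ig-crd.yaml reaches kubectl without apiVersion or kind set; the earlier scp line recorded only 14 bytes for that file, so it is effectively empty, and no number of retries can fix it. A cheap pre-flight check like the sketch below (stdlib only, a naive top-level scan rather than full YAML parsing) would catch such a manifest before the apply/retry loop starts.

```go
// Pre-flight sanity check: does the manifest set apiVersion and kind at all?
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func hasTopLevelKeys(path string) (hasAPIVersion, hasKind bool, err error) {
	f, err := os.Open(path)
	if err != nil {
		return false, false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "apiVersion:") {
			hasAPIVersion = true
		}
		if strings.HasPrefix(line, "kind:") {
			hasKind = true
		}
	}
	return hasAPIVersion, hasKind, sc.Err()
}

func main() {
	api, kind, err := hasTopLevelKeys("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		panic(err)
	}
	if !api || !kind {
		fmt.Println("manifest is missing apiVersion and/or kind; skip apply")
	}
}
```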
	I0908 13:41:17.916259  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.819838596s)
	I0908 13:41:17.916316  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.775751862s)
	I0908 13:41:17.916342  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.769802413s)
	I0908 13:41:17.916369  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.746224542s)
	I0908 13:41:17.916573  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.745611577s)
	I0908 13:41:18.093477  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:19.401450  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.175298853s)
	I0908 13:41:19.401944  708040 addons.go:479] Verifying addon ingress=true in "addons-242829"
	I0908 13:41:19.401662  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.992802354s)
	I0908 13:41:19.401687  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.913444294s)
	I0908 13:41:19.402442  708040 addons.go:479] Verifying addon registry=true in "addons-242829"
	I0908 13:41:19.401725  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.619875472s)
	I0908 13:41:19.401811  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.516522903s)
	W0908 13:41:19.403343  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 13:41:19.403360  708040 retry.go:31] will retry after 344.162618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
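This "ensure CRDs are installed first" failure is an ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, before the API server starts serving the new type. One standard way to avoid it, sketched below rather than taken from minikube, is to apply the CRD manifests, block on their Established condition with "kubectl wait", and only then apply the dependent resources; file names are the ones from the log.

```go
// Apply CRDs, wait for Established, then apply the dependent resources.
package main

import "os/exec"

func kubectl(args ...string) error {
	base := []string{"/var/lib/minikube/binaries/v1.34.0/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig"}
	return exec.Command("sudo", append(base, args...)...).Run()
}

func main() {
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := kubectl("apply", "-f", f); err != nil {
			panic(err)
		}
	}
	// Block until the API server serves the new type before applying CRs.
	if err := kubectl("wait", "--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"--timeout=60s"); err != nil {
		panic(err)
	}
	if err := kubectl("apply", "-f",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
}
```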
	I0908 13:41:19.401593  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.044247567s)
	I0908 13:41:19.403382  708040 addons.go:479] Verifying addon metrics-server=true in "addons-242829"
	I0908 13:41:19.405284  708040 out.go:179] * Verifying ingress addon...
	I0908 13:41:19.407186  708040 out.go:179] * Verifying registry addon...
	I0908 13:41:19.407194  708040 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-242829 service yakd-dashboard -n yakd-dashboard
	
	I0908 13:41:19.409994  708040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 13:41:19.410066  708040 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 13:41:19.429418  708040 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:41:19.429439  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:19.429605  708040 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 13:41:19.429612  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
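The kapi.go lines above poll pods by label selector until each addon's pods leave Pending. A hedged equivalent using plain kubectl rather than minikube's internal watcher, with the same selectors and namespaces the log reports and an assumed 6-minute timeout:

```go
// Wait for addon pods to become Ready, mirroring the kapi.go polling above.
package main

import (
	"fmt"
	"os/exec"
)

func waitReady(ns, selector string) error {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.0/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"wait", "--namespace", ns,
		"--for=condition=Ready", "pod",
		"--selector", selector, "--timeout=6m0s")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := waitReady("kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	if err := waitReady("ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}
```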
	I0908 13:41:19.689486  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.980029831s)
	I0908 13:41:19.689576  708040 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-242829"
	I0908 13:41:19.689845  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.596336273s)
	W0908 13:41:19.689910  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:19.689946  708040 retry.go:31] will retry after 515.372899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:19.692711  708040 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 13:41:19.696455  708040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 13:41:19.704002  708040 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:41:19.704125  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:19.748277  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:41:19.916207  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:19.916435  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:19.996475  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:20.200033  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:20.206363  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:20.415079  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:20.415292  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:20.711673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:20.920044  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:20.920812  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:21.099876  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:21.099930  708040 retry.go:31] will retry after 311.135348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:21.200420  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:21.411494  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:21.415562  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:21.415733  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:21.700870  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:21.717496  708040 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 13:41:21.717622  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:21.739940  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:21.887668  708040 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 13:41:21.908933  708040 addons.go:238] Setting addon gcp-auth=true in "addons-242829"
	I0908 13:41:21.908996  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:21.909450  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:21.923512  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:21.923581  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:21.934318  708040 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 13:41:21.934371  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:21.962232  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:22.010891  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:22.201150  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:22.296909  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:22.296940  708040 retry.go:31] will retry after 1.017925099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:22.300158  708040 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 13:41:22.303251  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:41:22.306025  708040 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 13:41:22.306043  708040 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 13:41:22.324387  708040 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 13:41:22.324467  708040 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 13:41:22.343564  708040 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:41:22.343587  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 13:41:22.363178  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:41:22.415496  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:22.416486  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:22.702569  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:22.881271  708040 addons.go:479] Verifying addon gcp-auth=true in "addons-242829"
	I0908 13:41:22.884593  708040 out.go:179] * Verifying gcp-auth addon...
	I0908 13:41:22.888126  708040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 13:41:22.894334  708040 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 13:41:22.894359  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
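
Each kapi.go:96 line in this log is one tick of a label-selector poll: list the pods matching the selector, check their phases, sleep, and repeat until everything is Running or the timeout expires. A minimal sketch of that loop with client-go, assuming a configured *kubernetes.Clientset; waitForLabel, the 500ms interval, and the 10-minute timeout are illustrative choices, not minikube's actual values:

    package waiters

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls pods matching selector in ns until every one is
    // Running, the same shape as the "current state: Pending" loop above.
    func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // nothing matched yet; keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // still Pending, as in the log lines above
                    }
                }
                return true, nil
            })
    }

For the wait logged here that would be, e.g., waitForLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth").
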
	I0908 13:41:22.914026  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:22.914313  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:23.200280  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:23.315506  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:23.392231  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:23.414679  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:23.415048  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:23.700340  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:23.900807  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:23.920598  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:23.921288  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:24.142370  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:24.142444  708040 retry.go:31] will retry after 834.215853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:24.200374  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:24.391473  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:24.413698  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:24.414788  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:24.496489  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:24.699165  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:24.890915  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:24.913881  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:24.914346  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:24.977499  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:25.200372  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:25.391930  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:25.413654  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:25.414211  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:25.700735  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:25.786460  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:25.786538  708040 retry.go:31] will retry after 1.967547754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:25.891812  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:25.914784  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:25.915633  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:26.200578  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:26.391652  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:26.413824  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:26.414638  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:26.497328  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:26.699680  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:26.892563  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:26.913706  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:26.913945  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:27.200205  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:27.391347  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:27.413856  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:27.414478  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:27.700343  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:27.754653  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:27.892099  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:27.914189  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:27.915097  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:28.200116  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:28.391196  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:28.415182  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:28.415327  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:28.561320  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:28.561377  708040 retry.go:31] will retry after 1.770519939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:28.700308  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:28.891058  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:28.913510  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:28.913879  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:28.996767  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:29.199581  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:29.391442  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:29.413682  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:29.413803  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:29.700747  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:29.891524  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:29.913807  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:29.913959  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:30.199989  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:30.332392  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:30.391362  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:30.414685  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:30.414784  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:30.704565  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:30.891328  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:30.914411  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:30.916127  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:31.184026  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:31.184058  708040 retry.go:31] will retry after 3.136059055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:31.200222  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:31.391245  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:31.413602  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:31.414065  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:31.495794  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:31.700123  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:31.890950  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:31.914256  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:31.914539  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:32.200568  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:32.391514  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:32.413657  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:32.413869  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:32.699792  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:32.891599  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:32.913516  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:32.913848  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:33.199502  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:33.391394  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:33.413550  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:33.413672  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:33.496333  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:33.702665  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:33.892453  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:33.913843  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:33.914291  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:34.200451  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:34.320628  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:34.392860  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:34.414690  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:34.415301  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:34.700489  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:34.891405  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:34.914270  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:34.914948  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:35.153952  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:35.154032  708040 retry.go:31] will retry after 6.272645383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:35.200220  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:35.390999  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:35.414180  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:35.414198  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:35.497276  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:35.701456  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:35.891540  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:35.913467  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:35.913536  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:36.199794  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:36.391837  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:36.414592  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:36.414658  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:36.699656  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:36.891628  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:36.913833  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:36.914055  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:37.200753  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:37.391555  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:37.413842  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:37.413941  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:37.700001  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:37.892050  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:37.914398  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:37.914479  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:37.996443  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:38.201339  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:38.391548  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:38.413568  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:38.413732  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:38.699724  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:38.891655  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:38.914139  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:38.914340  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:39.200303  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:39.390837  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:39.413824  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:39.413922  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:39.700051  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:39.892200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:39.913005  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:39.913454  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:40.199768  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:40.392040  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:40.413929  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:40.414613  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:40.498749  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:40.700833  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:40.891661  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:40.914165  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:40.914306  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:41.199813  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:41.391771  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:41.414076  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:41.420172  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:41.427230  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:41.700824  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:41.891854  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:41.919928  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:41.920210  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:42.201312  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:42.293578  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:42.293615  708040 retry.go:31] will retry after 8.145627481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:42.391580  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:42.414014  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:42.414444  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:42.700214  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:42.891546  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:42.913957  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:42.914024  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:42.997097  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:43.200168  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:43.391027  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:43.413206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:43.413432  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:43.700243  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:43.891203  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:43.913283  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:43.913593  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:44.200399  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:44.391231  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:44.413526  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:44.413648  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:44.700174  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:44.891107  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:44.913247  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:44.914030  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:45.201531  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:45.392114  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:45.414492  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:45.414801  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:45.496764  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:45.699914  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:45.891004  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:45.914432  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:45.914546  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:46.199786  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:46.391795  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:46.414263  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:46.414841  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:46.699842  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:46.891711  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:46.913996  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:46.914323  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:47.199953  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:47.392015  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:47.414200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:47.414543  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:47.700090  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:47.891105  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:47.913217  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:47.913214  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:47.996801  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:48.199853  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:48.391787  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:48.414160  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:48.414399  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:48.699631  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:48.891666  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:48.914222  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:48.914282  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:49.199837  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:49.391694  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:49.413503  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:49.414564  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:49.699482  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:49.891523  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:49.913520  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:49.913535  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:50.199630  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:50.392757  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:50.413700  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:50.413845  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:50.440019  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 13:41:50.497751  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:50.700673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:50.891181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:50.914248  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:50.914776  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:51.200855  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:51.245912  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:51.245945  708040 retry.go:31] will retry after 9.644713629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:51.391633  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:51.414175  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:51.415684  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:51.700275  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:51.891122  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:51.913294  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:51.913401  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:52.199762  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:52.392392  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:52.413654  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:52.413786  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:52.699608  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:52.891857  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:52.915434  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:52.917088  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:52.997035  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:53.200333  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:53.391316  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:53.413634  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:53.413799  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:53.699878  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:53.891935  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:53.914060  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:53.914510  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:54.199897  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:54.391924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:54.414287  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:54.414443  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:54.700155  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:54.891221  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:54.913598  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:54.913712  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:55.199375  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:55.392506  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:55.413303  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:55.413448  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:55.496218  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:55.700454  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:55.891341  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:55.913539  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:55.913655  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:56.200072  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:56.391036  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:56.414115  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:56.414176  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:56.711373  708040 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:41:56.711399  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:56.934983  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:56.938037  708040 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:41:56.938064  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:56.949588  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:57.019553  708040 node_ready.go:49] node "addons-242829" is "Ready"
	I0908 13:41:57.019588  708040 node_ready.go:38] duration metric: took 41.526420739s for node "addons-242829" to be "Ready" ...
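The node readiness gate above resolved after ~41.5s. Outside the test harness, the same wait can be reproduced with kubectl's built-in condition waiter (a minimal sketch; the context and node names are taken from the log above):

	kubectl --context addons-242829 wait --for=condition=Ready node/addons-242829 --timeout=120s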
	I0908 13:41:57.019612  708040 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:41:57.019691  708040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:41:57.049579  708040 api_server.go:72] duration metric: took 44.192878889s to wait for apiserver process to appear ...
	I0908 13:41:57.049606  708040 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:41:57.049626  708040 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 13:41:57.062697  708040 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
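The bare "ok" above is the body of the apiserver's /healthz response. The same probe can be issued by hand through kubectl's raw API path, or directly against the address logged above (a sketch; the direct curl only works if the cluster permits anonymous access to /healthz):

	kubectl --context addons-242829 get --raw='/healthz'
	curl -k https://192.168.49.2:8443/healthz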
	I0908 13:41:57.068831  708040 api_server.go:141] control plane version: v1.34.0
	I0908 13:41:57.068864  708040 api_server.go:131] duration metric: took 19.250696ms to wait for apiserver health ...
	I0908 13:41:57.068873  708040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:41:57.081569  708040 system_pods.go:59] 19 kube-system pods found
	I0908 13:41:57.081606  708040 system_pods.go:61] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.081614  708040 system_pods.go:61] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.081641  708040 system_pods.go:61] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.081649  708040 system_pods.go:61] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.081659  708040 system_pods.go:61] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.081664  708040 system_pods.go:61] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.081673  708040 system_pods.go:61] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.081678  708040 system_pods.go:61] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.081685  708040 system_pods.go:61] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.081695  708040 system_pods.go:61] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.081701  708040 system_pods.go:61] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.081714  708040 system_pods.go:61] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending
	I0908 13:41:57.081723  708040 system_pods.go:61] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.081730  708040 system_pods.go:61] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.081738  708040 system_pods.go:61] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending
	I0908 13:41:57.081747  708040 system_pods.go:61] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.081751  708040 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending
	I0908 13:41:57.081761  708040 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.081772  708040 system_pods.go:61] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.081789  708040 system_pods.go:74] duration metric: took 12.89997ms to wait for pod list to return data ...
	I0908 13:41:57.081802  708040 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:41:57.085785  708040 default_sa.go:45] found service account: "default"
	I0908 13:41:57.085814  708040 default_sa.go:55] duration metric: took 4.005029ms for default service account to be created ...
	I0908 13:41:57.085834  708040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:41:57.091056  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:57.091092  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.091102  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.091119  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.091124  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.091130  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.091135  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.091143  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.091147  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.091160  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.091165  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.091170  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.091179  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending
	I0908 13:41:57.091191  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.091202  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.091208  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending
	I0908 13:41:57.091225  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.091229  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending
	I0908 13:41:57.091235  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.091241  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.091268  708040 retry.go:31] will retry after 206.400197ms: missing components: kube-dns
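The "missing components: kube-dns" retry above is gated on coredns-66bc5c9577-gnj8l leaving Pending. To watch that gate resolve by hand, the stock CoreDNS selector can be used (a sketch, assuming the default k8s-app=kube-dns label):

	kubectl --context addons-242829 -n kube-system get pods -l k8s-app=kube-dns -w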
	I0908 13:41:57.225230  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:57.382621  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:57.382672  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.382688  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.382697  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.382709  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.382718  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.382730  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.382734  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.382740  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.382751  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.382760  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.382765  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.382770  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending
	I0908 13:41:57.382785  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.382791  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.382798  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:41:57.382816  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.382829  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.382841  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.382851  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.382866  708040 retry.go:31] will retry after 281.461125ms: missing components: kube-dns
	I0908 13:41:57.566825  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:57.566979  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:57.567042  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:57.673634  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:57.673677  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.673687  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.673694  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.673699  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.673705  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.673709  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.673713  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.673719  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.673729  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.673734  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.673739  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.673751  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:57.673756  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.673761  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.673773  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:41:57.673779  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.673787  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.673800  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.673805  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.673851  708040 retry.go:31] will retry after 355.091475ms: missing components: kube-dns
	I0908 13:41:57.728459  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:57.892378  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:57.914406  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:57.914736  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:58.034165  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:58.034205  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Running
	I0908 13:41:58.034216  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:58.034223  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:58.034231  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 13:41:58.034236  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:58.034242  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:58.034246  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:58.034250  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:58.034257  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:58.034261  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:58.034267  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:58.034273  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:58.034285  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:41:58.034302  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:58.034308  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:41:58.034314  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:58.034323  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:58.034330  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:58.034337  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Running
	I0908 13:41:58.034346  708040 system_pods.go:126] duration metric: took 948.503808ms to wait for k8s-apps to be running ...
	I0908 13:41:58.034353  708040 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:41:58.034413  708040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:41:58.046675  708040 system_svc.go:56] duration metric: took 12.311351ms WaitForService to wait for kubelet
	I0908 13:41:58.046704  708040 kubeadm.go:578] duration metric: took 45.190021561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:58.046723  708040 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:41:58.050746  708040 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:41:58.050780  708040 node_conditions.go:123] node cpu capacity is 2
	I0908 13:41:58.050794  708040 node_conditions.go:105] duration metric: took 4.065304ms to run NodePressure ...
	I0908 13:41:58.050807  708040 start.go:241] waiting for startup goroutines ...
	I0908 13:41:58.199529  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:58.392033  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:58.492573  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:58.492572  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:58.699695  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:58.892010  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:58.913582  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:58.913660  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:59.202061  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:59.391992  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:59.414658  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:59.415011  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:59.700822  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:59.893153  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:59.914943  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:59.915255  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:00.203583  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:00.395107  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:00.416597  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:00.422923  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:00.702077  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:00.891772  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:42:00.894942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:00.919178  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:00.919549  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:01.200098  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:01.390899  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:01.414991  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:01.415950  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:01.708233  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:01.892047  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:01.914926  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:01.915117  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:02.057004  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.165196006s)
	W0908 13:42:02.057044  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:42:02.057065  708040 retry.go:31] will retry after 13.278193176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
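The stderr above is the root cause of this retry loop: client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in it is missing its apiVersion: and kind: header fields, which every Kubernetes manifest must carry. The rejection can be reproduced without touching the cluster, and the error text itself names the escape hatch that skips validation (a triage sketch, to be run inside the node, e.g. after minikube ssh, since the addon manifests live on the node's filesystem; the durable fix is restoring the missing header fields in the manifest):

	# reproduce the validation failure client-side only:
	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# the workaround the error message itself suggests (skips client-side schema validation):
	kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml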
	I0908 13:42:02.201288  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:02.391743  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:02.415314  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:02.415517  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:02.700021  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:02.892200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:02.913938  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:02.914055  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:03.200870  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:03.391020  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:03.413893  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:03.414049  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:03.704273  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:03.911452  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:03.939772  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:03.951983  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:04.200704  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:04.392476  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:04.416675  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:04.418078  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:04.708637  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:04.892806  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:04.916241  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:04.916675  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:05.200541  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:05.392439  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:05.417597  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:05.419148  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:05.700913  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:05.901002  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:05.925950  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:05.926359  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:06.200044  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:06.392128  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:06.414738  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:06.414818  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:06.704157  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:06.891942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:06.936246  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:06.936383  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:07.199387  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:07.391732  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:07.415433  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:07.415519  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:07.700478  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:07.904197  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:07.915680  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:08.005718  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:08.199743  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:08.391527  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:08.414516  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:08.414657  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:08.700834  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:08.891999  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:08.913876  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:08.914421  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:09.201248  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:09.402491  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:09.502944  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:09.503470  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:09.699929  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:09.899606  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:09.927376  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:09.927904  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:10.200596  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:10.392667  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:10.416487  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:10.418102  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:10.705038  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:10.891719  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:10.917777  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:10.918158  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:11.200737  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:11.392926  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:11.414664  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:11.415437  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:11.701301  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:11.892200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:11.917313  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:11.925334  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:12.201209  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:12.391822  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:12.417373  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:12.417850  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:12.703937  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:12.891911  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:12.918853  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:12.918971  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:13.200511  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:13.391860  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:13.415644  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:13.415804  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:13.700005  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:13.894852  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:13.920852  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:13.921548  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:14.200826  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:14.392721  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:14.415285  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:14.415634  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:14.699879  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:14.892183  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:14.915455  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:14.915721  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:15.209597  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:15.336265  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:42:15.391878  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:15.416328  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:15.416458  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:15.705682  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:15.891947  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:15.922592  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:15.922729  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:16.203768  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:16.405506  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:16.420961  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:16.421351  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:16.437458  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.101153275s)
	W0908 13:42:16.437498  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:42:16.437516  708040 retry.go:31] will retry after 46.893704239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:42:16.702477  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:16.894974  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:16.916846  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:16.917240  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:17.203294  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:17.392105  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:17.414027  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:17.414341  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:17.702085  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:17.895313  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:17.917126  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:17.917638  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:18.200715  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:18.394735  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:18.418049  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:18.418161  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:18.725785  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:18.892066  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:18.915212  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:18.915343  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:19.200012  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:19.391319  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:19.414957  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:19.415101  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:19.700790  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:19.892192  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:19.915034  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:19.915164  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:20.200616  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:20.391950  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:20.417046  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:20.417168  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:20.702794  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:20.894109  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:20.913812  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:20.914256  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:21.200181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:21.391295  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:21.415065  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:21.416435  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:21.699773  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:21.891910  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:21.950868  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:21.952350  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:22.202275  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:22.391663  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:22.417138  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:22.417592  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:22.702111  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:22.891612  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:22.916360  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:22.916849  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:23.200210  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:23.391942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:23.418637  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:23.418828  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:23.708945  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:23.892107  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:23.915071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:23.915415  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:24.201010  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:24.391471  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:24.415964  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:24.416401  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:24.703181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:24.892030  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:24.929112  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:24.932658  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:25.203165  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:25.391569  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:25.416386  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:25.416603  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:25.700342  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:25.892909  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:25.916273  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:25.916633  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:26.201912  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:26.397695  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:26.415829  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:26.416224  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:26.701053  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:26.898778  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:26.931277  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:26.931678  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:27.203555  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:27.392291  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:27.415043  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:27.416406  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:27.703185  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:27.891919  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:27.914153  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:27.914383  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:28.199856  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:28.392399  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:28.414108  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:28.414409  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:28.701720  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:28.891673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:28.916785  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:28.917244  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:29.200064  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:29.392525  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:29.415936  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:29.416451  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:29.699779  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:29.891337  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:29.913420  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:29.913840  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:30.200731  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:30.391498  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:30.414937  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:30.415064  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:30.700714  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:30.892927  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:30.915982  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:30.916730  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:31.199687  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:31.392193  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:31.414979  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:31.416500  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:31.701564  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:31.891028  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:31.914094  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:31.914181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:32.200584  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:32.391262  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:32.413807  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:32.414761  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:32.699960  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:32.890927  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:32.913071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:32.913328  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:33.200603  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:33.391987  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:33.414312  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:33.414561  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:33.699776  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:33.891906  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:33.914481  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:33.914772  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:34.201052  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:34.392262  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:34.415850  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:34.416274  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:34.700643  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:34.891831  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:34.915387  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:34.915655  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:35.200468  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:35.392164  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:35.414092  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:35.415188  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:35.700572  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:35.891808  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:35.916037  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:35.916404  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:36.199814  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:36.392620  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:36.415501  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:36.415875  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:36.705161  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:36.906264  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:36.920416  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:36.920785  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:37.202391  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:37.391728  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:37.414885  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:37.415009  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:37.701029  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:37.891477  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:37.915580  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:37.921900  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:38.200907  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:38.393091  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:38.415535  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:38.416116  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:38.701379  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:38.894374  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:38.921424  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:38.921748  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:39.200396  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:39.392309  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:39.426132  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:39.426772  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:39.705841  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:39.891924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:39.916625  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:39.917844  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:40.202588  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:40.392206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:40.416275  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:40.416772  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:40.718333  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:40.892901  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:40.917001  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:40.919710  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:41.201121  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:41.391412  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:41.413128  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:41.413679  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:41.700536  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:41.893881  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:41.919616  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:41.920805  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:42.207641  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:42.392775  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:42.415942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:42.416217  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:42.701381  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:42.891537  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:42.916114  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:42.916274  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:43.203450  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:43.392649  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:43.415899  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:43.417881  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:43.700500  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:43.891954  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:43.913904  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:43.914686  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:44.200037  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:44.392428  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:44.414058  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:44.414205  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:44.701191  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:44.891366  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:44.914673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:44.915174  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:45.211385  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:45.391890  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:45.415605  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:45.416202  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:45.700879  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:45.891398  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:45.913487  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:45.914138  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:46.201370  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:46.393781  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:46.414378  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:46.415167  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:46.708882  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:46.891913  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:46.919176  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:46.919314  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:47.200847  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:47.392495  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:47.415605  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:47.416670  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:47.701988  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:47.893607  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:47.916062  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:47.916493  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:48.200024  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:48.393365  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:48.494033  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:48.494432  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:48.700990  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:48.891205  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:48.913924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:48.913974  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:49.201352  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:49.391165  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:49.413892  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:49.414319  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:49.699974  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:49.891206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:49.913415  708040 kapi.go:107] duration metric: took 1m30.503424251s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 13:42:49.913771  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:50.201701  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:50.392115  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:50.413272  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:50.700431  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:50.891054  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:50.914047  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:51.200252  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:51.391821  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:51.414127  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:51.700432  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:51.891305  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:51.913387  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:52.199895  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:52.392334  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:52.413358  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:52.702550  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:52.892065  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:52.913299  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:53.199654  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:53.391779  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:53.413998  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:53.700250  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:53.890775  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:53.913816  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:54.199932  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:54.391273  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:54.413039  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:54.700347  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:54.891254  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:54.913304  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:55.199475  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:55.391610  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:55.413532  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:55.700171  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:55.890999  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:55.913141  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:56.200177  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:56.391547  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:56.413528  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:56.700265  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:56.891921  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:56.913897  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:57.200280  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:57.392110  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:57.413874  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:57.699812  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:57.891615  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:57.913419  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:58.199473  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:58.392019  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:58.413787  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:58.699991  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:58.890942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:58.914010  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:59.200117  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:59.391503  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:59.413512  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:59.700005  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:59.890963  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:59.913087  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:00.208476  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:00.393701  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:00.414355  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:00.700071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:00.891211  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:00.914686  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:01.200551  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:01.392243  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:01.413413  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:01.700338  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:01.891246  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:01.913154  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:02.200313  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:02.391080  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:02.414088  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:02.699678  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:02.891310  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:02.913425  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:03.200952  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:03.332230  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:43:03.391613  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:03.415063  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:03.701506  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:03.892585  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:03.914121  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:43:04.177001  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 13:43:04.177107  708040 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
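
The "apiVersion not set, kind not set" failure above means kubectl's client-side validation found at least one document in /etc/kubernetes/addons/ig-crd.yaml missing the two fields every Kubernetes object must declare. For comparison, a minimal sketch of a well-formed CRD manifest; the group and resource names below are hypothetical placeholders for illustration, not the real inspektor-gadget CRD:

# Every object kubectl applies must declare apiVersion and kind;
# their absence is exactly what "apiVersion not set, kind not set" flags.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traces.gadget.example.io   # hypothetical: must be <plural>.<group>
spec:
  group: gadget.example.io         # hypothetical group
  scope: Namespaced
  names:
    plural: traces
    singular: trace
    kind: Trace
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object             # minimal schema required by apiextensions/v1

Because the problem here is missing content rather than a schema mismatch, the --validate=false escape hatch suggested in the error output would likely only defer the failure rather than fix it.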
	I0908 13:43:04.199889  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:04.392409  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:04.413633  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:04.700061  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:04.891196  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:04.913078  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:05.200510  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:05.391889  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:05.413959  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:05.700719  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:05.891708  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:05.913749  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:06.199566  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:06.391930  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:06.414009  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:06.700787  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:06.891766  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:06.913788  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:07.200138  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:07.391378  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:07.414123  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:07.700521  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:07.891667  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:07.913795  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:08.200271  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:08.392522  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:08.413949  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:08.700268  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:08.890981  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:08.916396  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:09.200019  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:09.391071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:09.413985  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:09.700852  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:09.897461  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:09.913593  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:10.200434  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:10.392683  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:10.417370  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:10.708550  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:10.892960  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:10.915962  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:11.200552  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:11.391586  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:11.414039  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:11.700585  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:11.892126  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:11.913631  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:12.203273  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:12.391140  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:12.414913  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:12.700619  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:12.897534  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:12.913904  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:13.200303  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:13.391458  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:13.413511  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:13.703099  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:13.891269  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:13.913228  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:14.199716  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:14.396647  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:14.493268  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:14.701683  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:14.892415  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:14.915121  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:15.200726  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:15.393225  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:15.416842  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:15.703215  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:15.893632  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:15.914839  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:16.201247  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:16.391968  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:16.413081  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:16.701902  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:16.892358  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:16.913494  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:17.201206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:17.391913  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:17.414124  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:17.701648  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:17.891970  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:17.913790  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:18.200204  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:18.392444  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:18.414315  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:18.701556  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:18.893288  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:18.914479  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:19.200197  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:19.391302  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:19.413902  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:19.700646  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:19.892919  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:19.914243  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:20.201524  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:20.392154  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:20.414313  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:20.709745  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:20.891351  708040 kapi.go:107] duration metric: took 1m58.003225632s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 13:43:20.894429  708040 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-242829 cluster.
	I0908 13:43:20.897459  708040 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 13:43:20.900396  708040 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0908 13:43:20.913059  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:21.201554  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:21.414609  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:21.704785  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:21.916471  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:22.200628  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:22.423967  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:22.728636  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:22.920898  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:23.199924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:23.413333  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:23.700443  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:23.913832  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:24.204918  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:24.414042  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:24.701469  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:24.913841  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:25.200380  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:25.413926  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:25.702936  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:25.916761  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:26.200246  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:26.414306  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:26.700721  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:26.917694  708040 kapi.go:107] duration metric: took 2m7.507624242s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 13:43:27.205486  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:27.735420  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:28.200558  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:28.702172  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:29.235238  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:29.702067  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:30.200421  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:30.699845  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:31.202478  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:31.699646  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:32.203036  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:32.700330  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:33.201044  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:33.699719  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:34.200974  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:34.700713  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:35.201629  708040 kapi.go:107] duration metric: took 2m15.505172409s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 13:43:35.207128  708040 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0908 13:43:35.210029  708040 addons.go:514] duration metric: took 2m22.352955183s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds storage-provisioner amd-gpu-device-plugin nvidia-device-plugin default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0908 13:43:35.210091  708040 start.go:246] waiting for cluster config update ...
	I0908 13:43:35.210117  708040 start.go:255] writing updated cluster config ...
	I0908 13:43:35.210436  708040 ssh_runner.go:195] Run: rm -f paused
	I0908 13:43:35.214047  708040 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:43:35.217576  708040 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnj8l" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.223440  708040 pod_ready.go:94] pod "coredns-66bc5c9577-gnj8l" is "Ready"
	I0908 13:43:35.223470  708040 pod_ready.go:86] duration metric: took 5.863456ms for pod "coredns-66bc5c9577-gnj8l" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.225882  708040 pod_ready.go:83] waiting for pod "etcd-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.230367  708040 pod_ready.go:94] pod "etcd-addons-242829" is "Ready"
	I0908 13:43:35.230456  708040 pod_ready.go:86] duration metric: took 4.547931ms for pod "etcd-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.232968  708040 pod_ready.go:83] waiting for pod "kube-apiserver-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.237856  708040 pod_ready.go:94] pod "kube-apiserver-addons-242829" is "Ready"
	I0908 13:43:35.237884  708040 pod_ready.go:86] duration metric: took 4.890221ms for pod "kube-apiserver-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.240402  708040 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.617819  708040 pod_ready.go:94] pod "kube-controller-manager-addons-242829" is "Ready"
	I0908 13:43:35.617863  708040 pod_ready.go:86] duration metric: took 377.426882ms for pod "kube-controller-manager-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.818155  708040 pod_ready.go:83] waiting for pod "kube-proxy-sn2lk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.218628  708040 pod_ready.go:94] pod "kube-proxy-sn2lk" is "Ready"
	I0908 13:43:36.218660  708040 pod_ready.go:86] duration metric: took 400.478423ms for pod "kube-proxy-sn2lk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.418841  708040 pod_ready.go:83] waiting for pod "kube-scheduler-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.818150  708040 pod_ready.go:94] pod "kube-scheduler-addons-242829" is "Ready"
	I0908 13:43:36.818180  708040 pod_ready.go:86] duration metric: took 399.309034ms for pod "kube-scheduler-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.818194  708040 pod_ready.go:40] duration metric: took 1.604108577s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:43:36.873448  708040 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:43:36.876810  708040 out.go:179] * Done! kubectl is now configured to use "addons-242829" cluster and "default" namespace by default
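
The gcp-auth messages in the log above describe an opt-out label for the credential mount. A minimal sketch of that opt-out (the `gcp-auth-skip-secret` label key is taken from the log itself; the pod name, image, and command are illustrative and not from this report):

	# hypothetical pod that should NOT get GCP credentials mounted by the gcp-auth addon
	kubectl --context addons-242829 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-creds-demo              # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"   # label key printed by the addon above
	spec:
	  containers:
	  - name: app
	    image: docker.io/library/busybox:stable
	    command: ["sleep", "3600"]
	EOF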
	
	
	==> CRI-O <==
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.020639177Z" level=info msg="Stopped pod sandbox (already stopped): 5f9bc92a09b167aa2b23dc47105fcc90b397fe27f78ca087df290d215b228cee" id=bfb209de-f5de-469f-928a-d5e1f09bf40e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.021103742Z" level=info msg="Removing pod sandbox: 5f9bc92a09b167aa2b23dc47105fcc90b397fe27f78ca087df290d215b228cee" id=38391bb7-58f8-4f93-8141-1c7765118163 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.033252847Z" level=info msg="Removed pod sandbox: 5f9bc92a09b167aa2b23dc47105fcc90b397fe27f78ca087df290d215b228cee" id=38391bb7-58f8-4f93-8141-1c7765118163 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.033747525Z" level=info msg="Stopping pod sandbox: 27dd9d90ed5b0a103ef704383dcfd75ab5fcd20d735b21bec9a04cd94df73436" id=4370e46e-051e-492c-bfa0-1bc271e939b6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.033855465Z" level=info msg="Stopped pod sandbox (already stopped): 27dd9d90ed5b0a103ef704383dcfd75ab5fcd20d735b21bec9a04cd94df73436" id=4370e46e-051e-492c-bfa0-1bc271e939b6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.034257415Z" level=info msg="Removing pod sandbox: 27dd9d90ed5b0a103ef704383dcfd75ab5fcd20d735b21bec9a04cd94df73436" id=6f7ae3e5-9031-4c53-b2bc-bc9389b536ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.046503974Z" level=info msg="Removed pod sandbox: 27dd9d90ed5b0a103ef704383dcfd75ab5fcd20d735b21bec9a04cd94df73436" id=6f7ae3e5-9031-4c53-b2bc-bc9389b536ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.047177042Z" level=info msg="Stopping pod sandbox: ab24a99177fcb25df9b4f2aa7b21bb33da15e27a3b0196d90699767deebfc513" id=1a09ad52-e2da-42f7-9ba9-85193d47a3b1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.047218159Z" level=info msg="Stopped pod sandbox (already stopped): ab24a99177fcb25df9b4f2aa7b21bb33da15e27a3b0196d90699767deebfc513" id=1a09ad52-e2da-42f7-9ba9-85193d47a3b1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.047593549Z" level=info msg="Removing pod sandbox: ab24a99177fcb25df9b4f2aa7b21bb33da15e27a3b0196d90699767deebfc513" id=79758164-e8c7-496e-abcf-062900bf2487 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.059240443Z" level=info msg="Removed pod sandbox: ab24a99177fcb25df9b4f2aa7b21bb33da15e27a3b0196d90699767deebfc513" id=79758164-e8c7-496e-abcf-062900bf2487 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.596396139Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=05d48e0a-3751-4792-ae72-556e4ab80b98 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.596638884Z" level=info msg="Image docker.io/nginx:alpine not found" id=05d48e0a-3751-4792-ae72-556e4ab80b98 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.597125299Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=db89d4c9-9780-47f8-a02d-b2c1c1cbe349 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:52:08 addons-242829 crio[983]: time="2025-09-08 13:52:08.599634178Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 13:52:53 addons-242829 crio[983]: time="2025-09-08 13:52:53.596150063Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b93c6500-15bb-4070-bc31-4014b503c905 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:52:53 addons-242829 crio[983]: time="2025-09-08 13:52:53.596421780Z" level=info msg="Image docker.io/nginx:alpine not found" id=b93c6500-15bb-4070-bc31-4014b503c905 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:04 addons-242829 crio[983]: time="2025-09-08 13:53:04.595804615Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d8d17845-ed07-4b22-9e4a-32b315be610e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:04 addons-242829 crio[983]: time="2025-09-08 13:53:04.596534743Z" level=info msg="Image docker.io/nginx:alpine not found" id=d8d17845-ed07-4b22-9e4a-32b315be610e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:18 addons-242829 crio[983]: time="2025-09-08 13:53:18.596244893Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fd3abb3d-6ba8-41ff-b177-bacbb6ce2a64 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:18 addons-242829 crio[983]: time="2025-09-08 13:53:18.596474919Z" level=info msg="Image docker.io/nginx:alpine not found" id=fd3abb3d-6ba8-41ff-b177-bacbb6ce2a64 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:30 addons-242829 crio[983]: time="2025-09-08 13:53:30.595855843Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=de20ce0d-9855-4fea-a8bd-48aae53f3c13 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:30 addons-242829 crio[983]: time="2025-09-08 13:53:30.596100253Z" level=info msg="Image docker.io/nginx:alpine not found" id=de20ce0d-9855-4fea-a8bd-48aae53f3c13 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:43 addons-242829 crio[983]: time="2025-09-08 13:53:43.596352223Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fc757970-c860-4538-9bf2-f321a6ea2773 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:53:43 addons-242829 crio[983]: time="2025-09-08 13:53:43.596576489Z" level=info msg="Image docker.io/nginx:alpine not found" id=fc757970-c860-4538-9bf2-f321a6ea2773 name=/runtime.v1.ImageService/ImageStatus
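
The pattern above — "Image docker.io/nginx:alpine not found" followed by "Trying to access" with no pull-completion message — matches the ImagePullBackOff that failed the Ingress test. A plausible triage sequence, not taken from this report (assumes the standard minikube and crictl CLIs are available):

	# does the runtime already have the image?
	minikube -p addons-242829 ssh -- sudo crictl images | grep nginx
	# retry the pull by hand to surface the underlying registry error
	minikube -p addons-242829 ssh -- sudo crictl pull docker.io/library/nginx:alpine
	# or side-load the image from the host, bypassing Docker Hub entirely
	minikube -p addons-242829 image load docker.io/nginx:alpine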
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a30f95fe1632f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          10 minutes ago      Running             busybox                   0                   a2374c9814eda       busybox
	c0e2fa2396b15       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             10 minutes ago      Running             controller                0                   3acaa6eaaebe1       ingress-nginx-controller-9cc49f96f-6crd5
	59980ed3186f7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b3f8a40cecf84afd8a5299442eab04c52f913ef6194e01dc4fbeb783f9d42c58            10 minutes ago      Running             gadget                    0                   2755b3771e4ac       gadget-kldqv
	91177dbb04658       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             10 minutes ago      Exited              patch                     3                   64b95056d642c       ingress-nginx-admission-patch-9t8cq
	e6beed061bfe6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   10 minutes ago      Exited              create                    0                   c88791b648b02       ingress-nginx-admission-create-4f2x7
	e4c12d80a2f80       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               11 minutes ago      Running             minikube-ingress-dns      0                   56252405e86a5       kube-ingress-dns-minikube
	fc30762357fd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner       0                   ffc810ffc2458       storage-provisioner
	5f02537abeb2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             11 minutes ago      Running             coredns                   0                   a5c6442ae7d3b       coredns-66bc5c9577-gnj8l
	8a649dce93b09       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             12 minutes ago      Running             kindnet-cni               0                   ac2934957e247       kindnet-hkztp
	f8d8b89978600       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             12 minutes ago      Running             kube-proxy                0                   061e17e5b84dc       kube-proxy-sn2lk
	02451464eed55       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             12 minutes ago      Running             kube-apiserver            0                   79b0ac473e4af       kube-apiserver-addons-242829
	be8406ca5c1ed       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             12 minutes ago      Running             kube-scheduler            0                   f69d79fcbdf4d       kube-scheduler-addons-242829
	4c1bb6d039fd7       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             12 minutes ago      Running             kube-controller-manager   0                   6b16ee517ae64       kube-controller-manager-addons-242829
	04c1e12420631       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             12 minutes ago      Running             etcd                      0                   a184ebc732186       etcd-addons-242829
	
	
	==> coredns [5f02537abeb2ea5962133ab212d003231638185d7c37aac98e0dca0d6cffaf4a] <==
	[INFO] 10.244.0.16:36903 - 18941 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002481748s
	[INFO] 10.244.0.16:36903 - 60209 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000272317s
	[INFO] 10.244.0.16:36903 - 14947 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000234769s
	[INFO] 10.244.0.16:35234 - 9101 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000207471s
	[INFO] 10.244.0.16:35234 - 9331 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.002899256s
	[INFO] 10.244.0.16:50633 - 10263 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013606s
	[INFO] 10.244.0.16:50633 - 10740 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000130669s
	[INFO] 10.244.0.16:48023 - 56314 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126213s
	[INFO] 10.244.0.16:48023 - 56503 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000150435s
	[INFO] 10.244.0.16:39062 - 26032 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001682981s
	[INFO] 10.244.0.16:39062 - 25844 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001749042s
	[INFO] 10.244.0.16:34404 - 25315 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137184s
	[INFO] 10.244.0.16:34404 - 25487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150706s
	[INFO] 10.244.0.21:60704 - 24519 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196541s
	[INFO] 10.244.0.21:41264 - 15606 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000088214s
	[INFO] 10.244.0.21:59090 - 31906 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126689s
	[INFO] 10.244.0.21:40895 - 42869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081051s
	[INFO] 10.244.0.21:58684 - 59770 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180975s
	[INFO] 10.244.0.21:43504 - 19377 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095436s
	[INFO] 10.244.0.21:40246 - 50157 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002048994s
	[INFO] 10.244.0.21:50913 - 18304 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002595045s
	[INFO] 10.244.0.21:49429 - 26048 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001926038s
	[INFO] 10.244.0.21:37250 - 24804 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002155621s
	[INFO] 10.244.0.24:35998 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000195925s
	[INFO] 10.244.0.24:51410 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130923s
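
The NXDOMAIN lines above are normal search-path expansion, not failures: with the cluster default of `options ndots:5`, a name such as `registry.kube-system.svc.cluster.local` is first tried with each suffix from the pod's `search` list (including the node's `us-east-2.compute.internal` domain) before being tried as-is, so only the final query returns NOERROR. A quick way to inspect the resolver config behind this, using an illustrative debug pod that is not part of the test run:

	kubectl --context addons-242829 run dns-debug --image=docker.io/library/busybox:stable --restart=Never -- sleep 3600
	kubectl --context addons-242829 exec dns-debug -- cat /etc/resolv.conf   # shows the search list and ndots:5
	# a trailing dot marks the name fully qualified and skips the expansion entirely
	kubectl --context addons-242829 exec dns-debug -- nslookup registry.kube-system.svc.cluster.local.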
	
	
	==> describe nodes <==
	Name:               addons-242829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-242829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=addons-242829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_41_08_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-242829
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:41:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-242829
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-242829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9a623c9b5844688a0a430e9b7ac547a
	  System UUID:                c25e32cb-1ec5-4151-a0b3-c6ef8f1dc70a
	  Boot ID:                    3b69f852-7505-47f7-82de-581d66319e23
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  gadget                      gadget-kldqv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-6crd5    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-gnj8l                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-242829                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-hkztp                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-242829                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-242829       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-sn2lk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-242829                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-242829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-242829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-242829 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-242829 event: Registered Node addons-242829 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-242829 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 8 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000684] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000907] FS-Cache: O-cookie d=00000000f75621f8{9P.session} n=000000002e0501ee
	[  +0.001029] FS-Cache: O-key=[10] '34323936393639353436'
	[  +0.000727] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000883] FS-Cache: N-cookie d=00000000f75621f8{9P.session} n=00000000ccfa13d2
	[  +0.001067] FS-Cache: N-key=[10] '34323936393639353436'
	[Sep 8 13:39] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [04c1e124206317e3618b751b13c0f3162ddfe05b2ce4e95d253df73faf85f04e] <==
	{"level":"info","ts":"2025-09-08T13:41:16.596691Z","caller":"traceutil/trace.go:172","msg":"trace[1413570816] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"132.7297ms","start":"2025-09-08T13:41:16.463946Z","end":"2025-09-08T13:41:16.596675Z","steps":["trace[1413570816] 'process raft request'  (duration: 132.553091ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:16.598886Z","caller":"traceutil/trace.go:172","msg":"trace[1257846946] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"138.285618ms","start":"2025-09-08T13:41:16.460565Z","end":"2025-09-08T13:41:16.598851Z","steps":["trace[1257846946] 'process raft request'  (duration: 135.584755ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:41:18.247893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.134096ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128039831268646847 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/gadget/gadget\" mod_revision:555 > success:<request_put:<key:\"/registry/daemonsets/gadget/gadget\" value_size:10012 >> failure:<request_range:<key:\"/registry/daemonsets/gadget/gadget\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T13:41:18.249339Z","caller":"traceutil/trace.go:172","msg":"trace[792231134] linearizableReadLoop","detail":"{readStateIndex:589; appliedIndex:588; }","duration":"152.437683ms","start":"2025-09-08T13:41:18.095808Z","end":"2025-09-08T13:41:18.248246Z","steps":["trace[792231134] 'read index received'  (duration: 8.356252ms)","trace[792231134] 'applied index is now lower than readState.Index'  (duration: 144.015346ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:41:18.249583Z","caller":"traceutil/trace.go:172","msg":"trace[450647026] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"161.628958ms","start":"2025-09-08T13:41:18.087939Z","end":"2025-09-08T13:41:18.249568Z","steps":["trace[450647026] 'compare'  (duration: 84.888312ms)","trace[450647026] 'store kv pair into bolt db' {req_type:put; key:/registry/daemonsets/gadget/gadget; req_size:10051; } (duration: 51.155691ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T13:41:18.272116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.285211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" limit:1 ","response":"range_response_count:1 size:849"}
	{"level":"info","ts":"2025-09-08T13:41:18.272228Z","caller":"traceutil/trace.go:172","msg":"trace[307411211] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:1; response_revision:579; }","duration":"176.387456ms","start":"2025-09-08T13:41:18.095797Z","end":"2025-09-08T13:41:18.272184Z","steps":["trace[307411211] 'agreement among raft nodes before linearized reading'  (duration: 154.458814ms)","trace[307411211] 'range keys from in-memory index tree'  (duration: 21.679539ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:41:18.281186Z","caller":"traceutil/trace.go:172","msg":"trace[1453730920] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"193.37504ms","start":"2025-09-08T13:41:18.087737Z","end":"2025-09-08T13:41:18.281112Z","steps":["trace[1453730920] 'process raft request'  (duration: 160.436252ms)","trace[1453730920] 'compare'  (duration: 18.038142ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:41:18.281866Z","caller":"traceutil/trace.go:172","msg":"trace[244369698] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"152.527654ms","start":"2025-09-08T13:41:18.129329Z","end":"2025-09-08T13:41:18.281856Z","steps":["trace[244369698] 'process raft request'  (duration: 146.286336ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:41:18.282122Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.857234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-hostpathplugin-provisioner-role\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T13:41:18.282186Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.894863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-kldqv\" limit:1 ","response":"range_response_count:1 size:7157"}
	{"level":"info","ts":"2025-09-08T13:41:18.282246Z","caller":"traceutil/trace.go:172","msg":"trace[446899083] range","detail":"{range_begin:/registry/pods/gadget/gadget-kldqv; range_end:; response_count:1; response_revision:584; }","duration":"152.950995ms","start":"2025-09-08T13:41:18.129277Z","end":"2025-09-08T13:41:18.282228Z","steps":["trace[446899083] 'agreement among raft nodes before linearized reading'  (duration: 152.777183ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282221Z","caller":"traceutil/trace.go:172","msg":"trace[436446437] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-hostpathplugin-provisioner-role; range_end:; response_count:0; response_revision:584; }","duration":"152.968251ms","start":"2025-09-08T13:41:18.129243Z","end":"2025-09-08T13:41:18.282211Z","steps":["trace[436446437] 'agreement among raft nodes before linearized reading'  (duration: 152.828212ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282409Z","caller":"traceutil/trace.go:172","msg":"trace[437679061] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"135.884156ms","start":"2025-09-08T13:41:18.146515Z","end":"2025-09-08T13:41:18.282399Z","steps":["trace[437679061] 'process raft request'  (duration: 135.132499ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282540Z","caller":"traceutil/trace.go:172","msg":"trace[1490017462] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"109.56516ms","start":"2025-09-08T13:41:18.172962Z","end":"2025-09-08T13:41:18.282527Z","steps":["trace[1490017462] 'process raft request'  (duration: 108.721825ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282814Z","caller":"traceutil/trace.go:172","msg":"trace[480968927] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"106.814459ms","start":"2025-09-08T13:41:18.175990Z","end":"2025-09-08T13:41:18.282805Z","steps":["trace[480968927] 'process raft request'  (duration: 105.761291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:41:20.034717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:20.050609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.786656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.814066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.853002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.881660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:51:02.158866Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1859}
	{"level":"info","ts":"2025-09-08T13:51:02.187809Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1859,"took":"28.073144ms","hash":63255730,"current-db-size-bytes":5959680,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3731456,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-09-08T13:51:02.187871Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":63255730,"revision":1859,"compact-revision":-1}
	
	
	==> kernel <==
	 13:53:43 up  3:36,  0 users,  load average: 0.43, 0.99, 2.44
	Linux addons-242829 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8a649dce93b099bfe465ca8bf631a3ab9344d9ecd2851723430c54a461799248] <==
	I0908 13:51:35.995029       1 main.go:301] handling current node
	I0908 13:51:45.995733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:45.995766       1 main.go:301] handling current node
	I0908 13:51:55.995372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:55.995404       1 main.go:301] handling current node
	I0908 13:52:05.994956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:05.995010       1 main.go:301] handling current node
	I0908 13:52:15.994752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:15.994785       1 main.go:301] handling current node
	I0908 13:52:25.995552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:25.995587       1 main.go:301] handling current node
	I0908 13:52:35.994945       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:35.994978       1 main.go:301] handling current node
	I0908 13:52:45.994776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:45.994812       1 main.go:301] handling current node
	I0908 13:52:55.995677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:52:55.995710       1 main.go:301] handling current node
	I0908 13:53:05.995035       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:53:05.995070       1 main.go:301] handling current node
	I0908 13:53:15.994743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:53:15.994776       1 main.go:301] handling current node
	I0908 13:53:25.995586       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:53:25.995647       1 main.go:301] handling current node
	I0908 13:53:35.995311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:53:35.995344       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02451464eed55f46245467264da9f81e893ffaba41ab356a8b58aeb45dd426cd] <==
	I0908 13:48:12.850381       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:48:43.199073       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:49:28.656018       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:49:59.970148       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:50:41.654127       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:51:04.776555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:51:15.794570       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:51:29.300464       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:51:29.301994       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:51:29.329549       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:51:29.329678       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:51:29.337236       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:51:29.337355       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:51:29.361030       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:51:29.362265       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 13:51:29.443377       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 13:51:29.443420       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0908 13:51:30.209498       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-attacher\" not found]"
	W0908 13:51:30.337394       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 13:51:30.444407       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0908 13:51:30.473463       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0908 13:51:43.399719       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:52:22.120352       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:52:57.361967       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:53:36.679639       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4c1bb6d039fd7710a8f3509fee628e1fd01910f45fd2910bb0a58b87a7862478] <==
	E0908 13:51:45.087767       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:45.089327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:49.812521       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:49.813589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:51:49.868116       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:51:49.869224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:05.308043       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:05.309275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:10.151978       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:10.153149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:12.995493       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:12.996676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0908 13:52:15.651316       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^1210636e-8cba-11f0-9c72-b6c018995cef" nodeName="addons-242829" scheduledPods=["default/task-pv-pod"]
	E0908 13:52:39.931888       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:39.932933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:45.200407       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:45.202034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:52:51.104550       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:52:51.105804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:53:16.279256       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:53:16.280430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:53:37.007743       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:53:37.009176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 13:53:37.031016       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 13:53:37.032172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [f8d8b89978600a891f8c9d71e92fb37d8c941ffb4c991b43cd45a709b66d1678] <==
	I0908 13:41:18.069957       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:41:18.616656       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:41:18.840155       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:41:18.841296       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:41:18.847631       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:41:19.351869       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:41:19.352004       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:41:19.359683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:41:19.360098       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:41:19.360300       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:19.362144       1 config.go:200] "Starting service config controller"
	I0908 13:41:19.362210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:41:19.362253       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:41:19.362282       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:41:19.362333       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:41:19.362361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:41:19.363021       1 config.go:309] "Starting node config controller"
	I0908 13:41:19.363073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:41:19.363106       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:41:19.464639       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:41:19.472552       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:41:19.472599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
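
The warning at 13:41:18 above ("nodePortAddresses is unset ...") carries its own remedy. In a kubeadm-based cluster such as this one, kube-proxy reads a KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system; a hedged sketch of where to look and what the suggested setting looks like (the `primary` keyword is available in Kubernetes 1.27 and later):

	# where the running config lives
	kubectl --context addons-242829 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# the fragment the warning recommends, inside the config.conf key (KubeProxyConfiguration):
	#   nodePortAddresses: ["primary"]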
	
	
	==> kube-scheduler [be8406ca5c1ed513a426f97c0170a6d3e01685f7aea08216f53df76eda0a9eef] <==
	I0908 13:41:05.711869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:05.716382       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:05.716439       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:05.717258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:41:05.717499       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 13:41:05.730813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0908 13:41:05.740293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 13:41:05.740375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:41:05.740423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:41:05.740473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:41:05.740521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:41:05.740484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:41:05.740567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:41:05.740627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:41:05.740664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:41:05.740770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:41:05.740790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:41:05.740832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:41:05.741398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:41:05.745418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:41:05.745556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 13:41:05.745616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 13:41:05.746443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:41:05.747317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0908 13:41:06.816715       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:52:57 addons-242829 kubelet[1528]: E0908 13:52:57.945927    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339577945618322 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:00 addons-242829 kubelet[1528]: I0908 13:53:00.595735    1528 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 13:53:02 addons-242829 kubelet[1528]: E0908 13:53:02.596002    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:53:04 addons-242829 kubelet[1528]: E0908 13:53:04.596886    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.739242    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b9cca13a06f224940c2573b78ad502a933cf22ce10b7df96c1554e90b6d34cac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b9cca13a06f224940c2573b78ad502a933cf22ce10b7df96c1554e90b6d34cac/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.739291    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8bb166bbaa2d92a3ef1cb31e7e3e188d1c666a47fc90f53a8459525d935a01ca/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8bb166bbaa2d92a3ef1cb31e7e3e188d1c666a47fc90f53a8459525d935a01ca/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.740793    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c559642c1ab3629d455f592524fe421503a8c3bdf824301cc586702ae4b07a32/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c559642c1ab3629d455f592524fe421503a8c3bdf824301cc586702ae4b07a32/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.751411    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7844083b1d8ae47c01481fda08f4724914306d4585ab5f79423924b0735e04ca/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7844083b1d8ae47c01481fda08f4724914306d4585ab5f79423924b0735e04ca/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.782448    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/51813e44d42338f62fedf0e3efe57145196b6d706931f504f2390faeed56da26/diff" to get inode usage: stat /var/lib/containers/storage/overlay/51813e44d42338f62fedf0e3efe57145196b6d706931f504f2390faeed56da26/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.783560    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0b2a6f8f6b830da5baa587f68cdf29333c728605f5860fbb380cebbd13028283/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0b2a6f8f6b830da5baa587f68cdf29333c728605f5860fbb380cebbd13028283/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.783577    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b9cca13a06f224940c2573b78ad502a933cf22ce10b7df96c1554e90b6d34cac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b9cca13a06f224940c2573b78ad502a933cf22ce10b7df96c1554e90b6d34cac/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.949117    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339587948863300 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:07 addons-242829 kubelet[1528]: E0908 13:53:07.949155    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339587948863300 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:17 addons-242829 kubelet[1528]: E0908 13:53:17.596913    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:53:17 addons-242829 kubelet[1528]: E0908 13:53:17.952139    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339597951848644 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:17 addons-242829 kubelet[1528]: E0908 13:53:17.952178    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339597951848644 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:18 addons-242829 kubelet[1528]: E0908 13:53:18.596764    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:53:27 addons-242829 kubelet[1528]: E0908 13:53:27.954721    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339607954458506 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:27 addons-242829 kubelet[1528]: E0908 13:53:27.954756    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339607954458506 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:30 addons-242829 kubelet[1528]: E0908 13:53:30.595684    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:53:30 addons-242829 kubelet[1528]: E0908 13:53:30.597176    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:53:37 addons-242829 kubelet[1528]: E0908 13:53:37.957395    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339617957116018 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:37 addons-242829 kubelet[1528]: E0908 13:53:37.957432    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339617957116018 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:53:43 addons-242829 kubelet[1528]: E0908 13:53:43.595351    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:53:43 addons-242829 kubelet[1528]: E0908 13:53:43.597238    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	
	
	==> storage-provisioner [fc30762357fd66cdc91ee738b3b766c1a664d382b7c7cd12d9aa361de75a3055] <==
	W0908 13:53:19.362770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:21.365765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:21.372457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:23.375798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:23.380218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:25.383172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:25.387769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:27.391343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:27.395982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:29.398489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:29.405387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:31.409168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:31.413805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:33.416529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:33.421161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:35.424162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:35.428916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:37.432188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:37.438725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:39.441706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:39.446172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:41.449092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:41.453727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:43.460650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:53:43.469042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
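The kubelet entries above show the real failure mode: every nginx pull dies on Docker Hub's anonymous "toomanyrequests" limit, not on anything cluster-side. As a sketch (assuming curl and jq are available on the test host), the remaining anonymous quota can be read from Docker's documented rate-limit preview endpoint:

	# Anonymous token for the dedicated rate-limit preview repo.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# HEAD request: reports the ratelimit-limit / ratelimit-remaining
	# headers without consuming a pull.
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit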
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-242829 -n addons-242829
helpers_test.go:269: (dbg) Run:  kubectl --context addons-242829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-242829 describe pod nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-242829 describe pod nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq: exit status 1 (165.594393ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-242829/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:45:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kp6lf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kp6lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  8m4s                default-scheduler  Successfully assigned default/nginx to addons-242829
	  Warning  Failed     5m34s               kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    97s (x5 over 8m3s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     67s (x4 over 7m5s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     67s (x5 over 7m5s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x15 over 7m4s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2s (x15 over 7m4s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-242829/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:45:25 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wk9js (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-wk9js:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m20s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-242829
	  Warning  Failed     5m4s (x2 over 6m34s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m15s (x5 over 8m20s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     105s (x3 over 7m35s)   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x5 over 7m35s)   kubelet            Error: ErrImagePull
	  Warning  Failed     43s (x16 over 7m35s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x19 over 7m35s)    kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4f2x7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9t8cq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-242829 describe pod nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 addons disable ingress-dns --alsologtostderr -v=1: (1.535460366s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 addons disable ingress --alsologtostderr -v=1: (7.823763866s)
--- FAIL: TestAddons/parallel/Ingress (493.27s)
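Since the failure is registry throttling rather than the ingress addon itself, one workaround sketch is to side-load the image so the kubelet never contacts Docker Hub. This helps the nginx:alpine pod, whose default imagePullPolicy is IfNotPresent; docker.io/nginx (an implicit :latest) defaults to Always and would still re-pull:

	# Pull once on the host (authenticated or within quota), then copy
	# the image into the node's container storage for CRI-O to use.
	docker pull nginx:alpine
	minikube image load nginx:alpine -p addons-242829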

                                                
                                    
TestAddons/parallel/CSI (391.44s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0908 13:45:05.101160  707279 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 13:45:05.109599  707279 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 13:45:05.109629  707279 kapi.go:107] duration metric: took 8.482359ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.491787ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-242829 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc hpvc -o jsonpath={.status.phase} -n default
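The run of identical get-pvc calls above is the test helper polling .status.phase until the claim binds. Outside the harness the same wait can be written declaratively (assuming kubectl >= 1.23, which accepts jsonpath conditions):

	# Block until the PVC reports phase Bound, with the same 6m budget
	# the test uses; exits non-zero on timeout.
	kubectl --context addons-242829 -n default wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m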
addons_test.go:562: (dbg) Run:  kubectl --context addons-242829 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [50b60ff4-d270-4f50-8369-f843eaa5c9bf] Pending
helpers_test.go:352: "task-pv-pod" [50b60ff4-d270-4f50-8369-f843eaa5c9bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-242829 -n addons-242829
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-09-08 13:51:25.854876474 +0000 UTC m=+684.130128367
addons_test.go:567: (dbg) Run:  kubectl --context addons-242829 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-242829 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-242829/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:45:25 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wk9js (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-wk9js:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-242829
Warning  Failed     2m44s (x2 over 4m14s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    109s (x4 over 6m)      kubelet            Pulling image "docker.io/nginx"
Warning  Failed     79s (x2 over 5m15s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     79s (x4 over 5m15s)    kubelet            Error: ErrImagePull
Normal   BackOff    6s (x11 over 5m15s)    kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     6s (x11 over 5m15s)    kubelet            Error: ImagePullBackOff
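State: Waiting with reason ImagePullBackOff and Restart Count: 0 means the container never started at all; the kubelet retries the pull with exponential back-off capped at five minutes, which is why the event counts read "x11 over 5m15s". When only the retry history matters, the pod's event stream is quicker to scan than a full describe:

	# Just this pod's events, oldest first.
	kubectl --context addons-242829 -n default get events --field-selector involvedObject.name=task-pv-pod --sort-by=.lastTimestamp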
addons_test.go:567: (dbg) Run:  kubectl --context addons-242829 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-242829 logs task-pv-pod -n default: exit status 1 (100.167902ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-242829 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
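To confirm the throttling happens inside the node (where CRI-O performs the pull) rather than on the test host, the pull can be retried by hand over minikube ssh; a sketch:

	# Re-run the pull CRI-O attempted; while the anonymous limit is
	# exhausted this fails with the same "toomanyrequests" error.
	minikube ssh -p addons-242829 -- sudo crictl pull docker.io/library/nginx:latest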
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-242829
helpers_test.go:243: (dbg) docker inspect addons-242829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21",
	        "Created": "2025-09-08T13:40:42.8963652Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708440,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:40:42.965703856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/hostname",
	        "HostsPath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/hosts",
	        "LogPath": "/var/lib/docker/containers/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21-json.log",
	        "Name": "/addons-242829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-242829:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-242829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21",
	                "LowerDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a-init/diff:/var/lib/docker/overlay2/e952efc7b05509e28dc878e8f391bd641c7ca24b818c7ca2dd0e02dcca8c51fa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aca94e5e464b3c3166a1c4b37d3516470c5d694c60e9ecd6fc7d40661c35b53a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-242829",
	                "Source": "/var/lib/docker/volumes/addons-242829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-242829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-242829",
	                "name.minikube.sigs.k8s.io": "addons-242829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9bf24bd76272dae654cf1fd6dab3567c143204d67c70e508f804bdc80ddb0517",
	            "SandboxKey": "/var/run/docker/netns/9bf24bd76272",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-242829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:29:e5:cf:87:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6d881d6faaefe4fae832a05ea66173e3cd8809d0e3fbd480724f018538b9c522",
	                    "EndpointID": "b31124decc38e71bd3c27193d8195834d2994d82cbeecde2a81225a417dd9f52",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-242829",
	                        "3fa9656c3a03"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
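The inspect output mostly rules out the node container itself: it is running, unpaused, and sized as requested, with "Memory": 4294967296 (the 4 GiB from --memory=4096) and "NanoCpus": 2000000000 (2 CPUs). A one-liner (assuming jq is installed) extracts just those limits:

	# Pull the resource limits out of the inspect blob.
	docker inspect addons-242829 | jq '.[0].HostConfig | {Memory, NanoCpus, MemorySwap}'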
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-242829 -n addons-242829
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 logs -n 25: (1.66095926s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-903641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-903641   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ start   │ -o=json --download-only -p download-only-445998 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-445998   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ delete  │ -p download-only-445998                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-445998   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ delete  │ -p download-only-903641                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-903641   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ delete  │ -p download-only-445998                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-445998   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ start   │ --download-only -p download-docker-833805 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-833805 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ delete  │ -p download-docker-833805                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-833805 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ start   │ --download-only -p binary-mirror-025043 --alsologtostderr --binary-mirror http://127.0.0.1:39871 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-025043   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ delete  │ -p binary-mirror-025043                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-025043   │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ addons  │ enable dashboard -p addons-242829                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ addons  │ disable dashboard -p addons-242829                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ start   │ -p addons-242829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:43 UTC │
	│ addons  │ addons-242829 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:43 UTC │ 08 Sep 25 13:43 UTC │
	│ addons  │ addons-242829 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:43 UTC │ 08 Sep 25 13:43 UTC │
	│ addons  │ enable headlamp -p addons-242829 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:43 UTC │ 08 Sep 25 13:43 UTC │
	│ ip      │ addons-242829 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ ssh     │ addons-242829 ssh cat /opt/local-path-provisioner/pvc-2fff7e83-b238-4284-875d-267426c27321_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:44 UTC │
	│ addons  │ addons-242829 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:44 UTC │ 08 Sep 25 13:45 UTC │
	│ addons  │ addons-242829 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-242829          │ jenkins │ v1.36.0 │ 08 Sep 25 13:45 UTC │ 08 Sep 25 13:45 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:40:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
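
	The "Log line format" header above is the standard klog convention, so every entry below can be split mechanically. A minimal sketch (stdlib regexp only, not minikube code) of that split, for anyone grepping these logs:

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    // klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	    func main() {
	    	line := "I0908 13:40:17.611835  708040 out.go:360] Setting OutFile to fd 1 ..."
	    	if m := klogLine.FindStringSubmatch(line); m != nil {
	    		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
	    			m[1], m[2], m[3], m[4], m[5], m[6])
	    	}
	    }
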
	I0908 13:40:17.611835  708040 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:40:17.612055  708040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:17.612105  708040 out.go:374] Setting ErrFile to fd 2...
	I0908 13:40:17.612130  708040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:17.612420  708040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 13:40:17.612888  708040 out.go:368] Setting JSON to false
	I0908 13:40:17.613741  708040 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12168,"bootTime":1757326650,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 13:40:17.613840  708040 start.go:140] virtualization:  
	I0908 13:40:17.617183  708040 out.go:179] * [addons-242829] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:40:17.620176  708040 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:40:17.620263  708040 notify.go:220] Checking for updates...
	I0908 13:40:17.625846  708040 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:40:17.628768  708040 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:40:17.631614  708040 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 13:40:17.634403  708040 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:40:17.637281  708040 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:40:17.640400  708040 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:40:17.662527  708040 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:40:17.662638  708040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:17.726885  708040 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 13:40:17.717676728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:17.726984  708040 docker.go:318] overlay module found
	I0908 13:40:17.730066  708040 out.go:179] * Using the docker driver based on user configuration
	I0908 13:40:17.732841  708040 start.go:304] selected driver: docker
	I0908 13:40:17.732863  708040 start.go:918] validating driver "docker" against <nil>
	I0908 13:40:17.732877  708040 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:40:17.733612  708040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:17.796743  708040 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 13:40:17.787818972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:17.796912  708040 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:40:17.797152  708040 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:40:17.800153  708040 out.go:179] * Using Docker driver with root privileges
	I0908 13:40:17.802944  708040 cni.go:84] Creating CNI manager for ""
	I0908 13:40:17.803023  708040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:40:17.803033  708040 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:40:17.803111  708040 start.go:348] cluster config:
	{Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:40:17.808034  708040 out.go:179] * Starting "addons-242829" primary control-plane node in "addons-242829" cluster
	I0908 13:40:17.810905  708040 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:40:17.813802  708040 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:40:17.816589  708040 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:40:17.816660  708040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:40:17.816691  708040 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 13:40:17.816703  708040 cache.go:58] Caching tarball of preloaded images
	I0908 13:40:17.816781  708040 preload.go:172] Found /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 13:40:17.816794  708040 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:40:17.817138  708040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/config.json ...
	I0908 13:40:17.817165  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/config.json: {Name:mk56726e796e1bf536ba1733dd837574e12d14da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:17.832827  708040 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:40:17.832978  708040 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:40:17.833003  708040 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 13:40:17.833013  708040 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 13:40:17.833021  708040 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 13:40:17.833032  708040 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 13:40:35.618104  708040 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 13:40:35.618146  708040 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:40:35.618186  708040 start.go:360] acquireMachinesLock for addons-242829: {Name:mk34f8a98d9e1dfddf6985bfb6f0c7f44da9593d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:40:35.619048  708040 start.go:364] duration metric: took 830.478µs to acquireMachinesLock for "addons-242829"
	I0908 13:40:35.619098  708040 start.go:93] Provisioning new machine with config: &{Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:40:35.619176  708040 start.go:125] createHost starting for "" (driver="docker")
	I0908 13:40:35.622506  708040 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 13:40:35.622755  708040 start.go:159] libmachine.API.Create for "addons-242829" (driver="docker")
	I0908 13:40:35.622800  708040 client.go:168] LocalClient.Create starting
	I0908 13:40:35.622921  708040 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem
	I0908 13:40:36.058437  708040 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem
	I0908 13:40:36.334678  708040 cli_runner.go:164] Run: docker network inspect addons-242829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 13:40:36.351083  708040 cli_runner.go:211] docker network inspect addons-242829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 13:40:36.351181  708040 network_create.go:284] running [docker network inspect addons-242829] to gather additional debugging logs...
	I0908 13:40:36.351204  708040 cli_runner.go:164] Run: docker network inspect addons-242829
	W0908 13:40:36.368601  708040 cli_runner.go:211] docker network inspect addons-242829 returned with exit code 1
	I0908 13:40:36.368636  708040 network_create.go:287] error running [docker network inspect addons-242829]: docker network inspect addons-242829: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-242829 not found
	I0908 13:40:36.368650  708040 network_create.go:289] output of [docker network inspect addons-242829]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-242829 not found
	
	** /stderr **
	I0908 13:40:36.368793  708040 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:40:36.385516  708040 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400194de40}
	I0908 13:40:36.385567  708040 network_create.go:124] attempt to create docker network addons-242829 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 13:40:36.385629  708040 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-242829 addons-242829
	I0908 13:40:36.445674  708040 network_create.go:108] docker network addons-242829 192.168.49.0/24 created
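
	The --format argument in the docker network inspect calls above is an ordinary Go text/template. A self-contained sketch of the same pipeline, using stand-in types rather than Docker's real network JSON, for anyone puzzling over the {{range .IPAM.Config}} syntax:

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // Stand-ins for the fields the template reads; Docker's real
	    // network JSON carries many more.
	    type ipamConfig struct{ Subnet, Gateway string }

	    type network struct {
	    	Name string
	    	IPAM struct{ Config []ipamConfig }
	    }

	    func main() {
	    	tmpl := template.Must(template.New("net").Parse(
	    		`{"Name": "{{.Name}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`))
	    	n := network{Name: "addons-242829"}
	    	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.49.0/24", Gateway: "192.168.49.1"}}
	    	_ = tmpl.Execute(os.Stdout, n) // prints the same JSON shape the CLI emits
	    }
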
	I0908 13:40:36.445712  708040 kic.go:121] calculated static IP "192.168.49.2" for the "addons-242829" container
	I0908 13:40:36.445807  708040 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 13:40:36.464418  708040 cli_runner.go:164] Run: docker volume create addons-242829 --label name.minikube.sigs.k8s.io=addons-242829 --label created_by.minikube.sigs.k8s.io=true
	I0908 13:40:36.482924  708040 oci.go:103] Successfully created a docker volume addons-242829
	I0908 13:40:36.483016  708040 cli_runner.go:164] Run: docker run --rm --name addons-242829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-242829 --entrypoint /usr/bin/test -v addons-242829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 13:40:38.605068  708040 cli_runner.go:217] Completed: docker run --rm --name addons-242829-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-242829 --entrypoint /usr/bin/test -v addons-242829:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (2.121993809s)
	I0908 13:40:38.605099  708040 oci.go:107] Successfully prepared a docker volume addons-242829
	I0908 13:40:38.605128  708040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:40:38.605147  708040 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 13:40:38.605212  708040 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-242829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 13:40:42.828113  708040 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-242829:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.222833461s)
	I0908 13:40:42.828144  708040 kic.go:203] duration metric: took 4.222992823s to extract preloaded images to volume ...
	W0908 13:40:42.828292  708040 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 13:40:42.828396  708040 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 13:40:42.881763  708040 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-242829 --name addons-242829 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-242829 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-242829 --network addons-242829 --ip 192.168.49.2 --volume addons-242829:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 13:40:43.195827  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Running}}
	I0908 13:40:43.217647  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:40:43.244837  708040 cli_runner.go:164] Run: docker exec addons-242829 stat /var/lib/dpkg/alternatives/iptables
	I0908 13:40:43.299790  708040 oci.go:144] the created container "addons-242829" has a running status.
	I0908 13:40:43.299817  708040 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa...
	I0908 13:40:43.632861  708040 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 13:40:43.662326  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:40:43.684683  708040 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 13:40:43.684702  708040 kic_runner.go:114] Args: [docker exec --privileged addons-242829 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 13:40:43.747014  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:40:43.775976  708040 machine.go:93] provisionDockerMachine start ...
	I0908 13:40:43.776094  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:43.803855  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:43.804254  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:43.804269  708040 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:40:43.976484  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-242829
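
	The "native" SSH client named above is libmachine's built-in Go client. A rough stand-alone equivalent (golang.org/x/crypto/ssh, with the key path and port 33528 taken from this log; not minikube's actual code) of the hostname probe it just ran:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	// Key and port as reported earlier in this log.
	    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	    	}
	    	client, err := ssh.Dial("tcp", "127.0.0.1:33528", cfg)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer client.Close()
	    	sess, err := client.NewSession()
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer sess.Close()
	    	out, err := sess.Output("hostname") // same probe the provisioner runs first
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Print(string(out)) // expected: addons-242829
	    }
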
	
	I0908 13:40:43.976549  708040 ubuntu.go:182] provisioning hostname "addons-242829"
	I0908 13:40:43.976634  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.002030  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:44.002363  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:44.002376  708040 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-242829 && echo "addons-242829" | sudo tee /etc/hostname
	I0908 13:40:44.146344  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-242829
	
	I0908 13:40:44.146493  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.164921  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:44.165316  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:44.165341  708040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-242829' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-242829/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-242829' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:40:44.298058  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:40:44.298126  708040 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-705422/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-705422/.minikube}
	I0908 13:40:44.298163  708040 ubuntu.go:190] setting up certificates
	I0908 13:40:44.298208  708040 provision.go:84] configureAuth start
	I0908 13:40:44.298291  708040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-242829
	I0908 13:40:44.316028  708040 provision.go:143] copyHostCerts
	I0908 13:40:44.316201  708040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem (1679 bytes)
	I0908 13:40:44.316352  708040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem (1078 bytes)
	I0908 13:40:44.316424  708040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem (1123 bytes)
	I0908 13:40:44.316470  708040 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem org=jenkins.addons-242829 san=[127.0.0.1 192.168.49.2 addons-242829 localhost minikube]
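
	The line above shows provision.go minting a server certificate with the listed SANs. A rough stdlib-only sketch of that step (self-signed here for brevity, whereas minikube signs with its CA key; the key size and usages are assumptions, not read from the log):

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"log"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	tmpl := x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-242829"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// SANs as listed in the provision.go line above.
	    		DNSNames:    []string{"addons-242829", "localhost", "minikube"},
	    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	    	}
	    	// Self-signed for brevity; minikube signs with its CA cert/key instead.
	    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
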
	I0908 13:40:44.470229  708040 provision.go:177] copyRemoteCerts
	I0908 13:40:44.470296  708040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:40:44.470346  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.491126  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:44.581210  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:40:44.604748  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 13:40:44.628212  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 13:40:44.652557  708040 provision.go:87] duration metric: took 354.306456ms to configureAuth
	I0908 13:40:44.652586  708040 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:40:44.652763  708040 config.go:182] Loaded profile config "addons-242829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:40:44.652870  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.669631  708040 main.go:141] libmachine: Using SSH client type: native
	I0908 13:40:44.670033  708040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0908 13:40:44.670057  708040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 13:40:44.903008  708040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 13:40:44.903031  708040 machine.go:96] duration metric: took 1.127035132s to provisionDockerMachine
	I0908 13:40:44.903041  708040 client.go:171] duration metric: took 9.280229822s to LocalClient.Create
	I0908 13:40:44.903063  708040 start.go:167] duration metric: took 9.280309568s to libmachine.API.Create "addons-242829"
	I0908 13:40:44.903073  708040 start.go:293] postStartSetup for "addons-242829" (driver="docker")
	I0908 13:40:44.903087  708040 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:40:44.903152  708040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:40:44.903197  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:44.921269  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.035812  708040 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:40:45.056825  708040 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:40:45.056861  708040 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:40:45.056872  708040 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:40:45.056880  708040 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:40:45.056900  708040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/addons for local assets ...
	I0908 13:40:45.056999  708040 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/files for local assets ...
	I0908 13:40:45.057025  708040 start.go:296] duration metric: took 153.94671ms for postStartSetup
	I0908 13:40:45.057469  708040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-242829
	I0908 13:40:45.085826  708040 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/config.json ...
	I0908 13:40:45.086193  708040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:40:45.086261  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:45.109288  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.210165  708040 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:40:45.215815  708040 start.go:128] duration metric: took 9.596620623s to createHost
	I0908 13:40:45.215852  708040 start.go:83] releasing machines lock for "addons-242829", held for 9.596774807s
	I0908 13:40:45.215938  708040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-242829
	I0908 13:40:45.236155  708040 ssh_runner.go:195] Run: cat /version.json
	I0908 13:40:45.236216  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:45.236537  708040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:40:45.236628  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:40:45.263280  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.276048  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:40:45.491979  708040 ssh_runner.go:195] Run: systemctl --version
	I0908 13:40:45.496235  708040 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 13:40:45.638651  708040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:40:45.642903  708040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:40:45.665039  708040 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:40:45.665113  708040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:40:45.700243  708040 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0908 13:40:45.700272  708040 start.go:495] detecting cgroup driver to use...
	I0908 13:40:45.700334  708040 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:40:45.700400  708040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 13:40:45.716691  708040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 13:40:45.728648  708040 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:40:45.728755  708040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:40:45.743811  708040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:40:45.758626  708040 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:40:45.849328  708040 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:40:45.955368  708040 docker.go:234] disabling docker service ...
	I0908 13:40:45.955523  708040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:40:45.976324  708040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:40:45.988475  708040 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:40:46.080099  708040 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:40:46.185763  708040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:40:46.198562  708040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:40:46.214900  708040 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 13:40:46.215024  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.225669  708040 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 13:40:46.225742  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.235946  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.246450  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.257079  708040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:40:46.266506  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.276829  708040 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:40:46.293173  708040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
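
	Pieced together from the sed commands above, the resulting /etc/crio/crio.conf.d/02-crio.conf drop-in should look roughly like the fragment below (a reconstruction, not captured from the node; section placement follows CRI-O's usual layout):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
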
	I0908 13:40:46.302879  708040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:40:46.311300  708040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:40:46.319915  708040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:40:46.409083  708040 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 13:40:46.517698  708040 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 13:40:46.517807  708040 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 13:40:46.521577  708040 start.go:563] Will wait 60s for crictl version
	I0908 13:40:46.521644  708040 ssh_runner.go:195] Run: which crictl
	I0908 13:40:46.525077  708040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:40:46.560834  708040 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 13:40:46.560977  708040 ssh_runner.go:195] Run: crio --version
	I0908 13:40:46.599539  708040 ssh_runner.go:195] Run: crio --version
	I0908 13:40:46.644113  708040 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 13:40:46.646918  708040 cli_runner.go:164] Run: docker network inspect addons-242829 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:40:46.665938  708040 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 13:40:46.669670  708040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 13:40:46.680418  708040 kubeadm.go:875] updating cluster {Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:40:46.680542  708040 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:40:46.680604  708040 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:40:46.766745  708040 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:40:46.766774  708040 crio.go:433] Images already preloaded, skipping extraction
	I0908 13:40:46.766834  708040 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:40:46.813615  708040 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:40:46.813652  708040 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:40:46.813662  708040 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 13:40:46.813793  708040 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-242829 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 13:40:46.813880  708040 ssh_runner.go:195] Run: crio config
	I0908 13:40:46.867613  708040 cni.go:84] Creating CNI manager for ""
	I0908 13:40:46.867639  708040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:40:46.867649  708040 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:40:46.867673  708040 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-242829 NodeName:addons-242829 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:40:46.867814  708040 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-242829"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
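The rendered kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml (the 2210-byte scp below). To sanity-check a config like this without bootstrapping anything, kubeadm can do it offline; a minimal sketch, assuming a kubeadm binary matching v1.34.0 on PATH:

    # schema/lint check only (the validate subcommand exists in recent kubeadm releases)
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

    # walk through init without persisting any changes
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run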
	I0908 13:40:46.867888  708040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:40:46.877181  708040 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:40:46.877289  708040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:40:46.886194  708040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 13:40:46.904960  708040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:40:46.922667  708040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 13:40:46.940731  708040 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:40:46.944114  708040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
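The one-liner above is the idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current mapping, and install the result with `sudo cp`. The temp-file-plus-cp dance is needed because in `sudo cmd > /etc/hosts` the redirection would be performed by the unprivileged shell, not by sudo. The same pattern standalone (a sketch; 192.168.49.2 is this run's node IP):

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.49.2	control-plane.minikube.internal"
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts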
	I0908 13:40:46.954925  708040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:40:47.042616  708040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:40:47.056142  708040 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829 for IP: 192.168.49.2
	I0908 13:40:47.056200  708040 certs.go:194] generating shared ca certs ...
	I0908 13:40:47.056231  708040 certs.go:226] acquiring lock for ca certs: {Name:mka1c81276fce9cfa47987803f0bf5dea623834c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:47.056374  708040 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key
	I0908 13:40:47.437346  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt ...
	I0908 13:40:47.437378  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt: {Name:mkd004c7069a772f6ed057c1d4b6ff0d25256d90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:47.438172  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key ...
	I0908 13:40:47.438194  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key: {Name:mkf893a7dc40078ec8ea08a6cd5404c07e3a3fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:47.438865  708040 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key
	I0908 13:40:48.138423  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt ...
	I0908 13:40:48.138455  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt: {Name:mkc0be6db70b1f098764e40b029d9241a4fe9ee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:48.139220  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key ...
	I0908 13:40:48.139236  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key: {Name:mk88915ba4aeda971a22e98cbaf73153c48ab75a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:48.139980  708040 certs.go:256] generating profile certs ...
	I0908 13:40:48.140051  708040 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.key
	I0908 13:40:48.140071  708040 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt with IP's: []
	I0908 13:40:49.291632  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt ...
	I0908 13:40:49.291662  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: {Name:mk412c20027f8bcddad3314be6ec3dd40e89c15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.291884  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.key ...
	I0908 13:40:49.291899  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.key: {Name:mkb7f447e49003768e6456164072912d0911889a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.292569  708040 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d
	I0908 13:40:49.292596  708040 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 13:40:49.539881  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d ...
	I0908 13:40:49.539915  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d: {Name:mk889859f9a587cbb40c3dba76b152e4bd4d2d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.540764  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d ...
	I0908 13:40:49.540786  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d: {Name:mk861c6128d0c10c3a1580bffb4cc49964d2fd8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.541493  708040 certs.go:381] copying /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt.1861509d -> /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt
	I0908 13:40:49.541577  708040 certs.go:385] copying /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key.1861509d -> /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key
	I0908 13:40:49.541632  708040 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key
	I0908 13:40:49.541653  708040 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt with IP's: []
	I0908 13:40:49.672690  708040 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt ...
	I0908 13:40:49.672724  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt: {Name:mk50703bc10335ebada97d3990082b193f0643fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.673517  708040 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key ...
	I0908 13:40:49.673535  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key: {Name:mk1e1b1f5fcc42e012687c527792308fe1e7f464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:49.673745  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 13:40:49.673786  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem (1078 bytes)
	I0908 13:40:49.673815  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:40:49.673846  708040 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem (1679 bytes)
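At this point the profile has a complete certificate set: the shared minikubeCA and proxyClientCA, the apiserver serving pair (generated above with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]), and the client/aggregator pairs. To confirm the SANs actually embedded in the serving cert, a sketch using the path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt \
      | grep -A1 "Subject Alternative Name"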
	I0908 13:40:49.674495  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:40:49.698878  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 13:40:49.722454  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:40:49.746171  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 13:40:49.770523  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 13:40:49.795380  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0908 13:40:49.820549  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:40:49.844517  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 13:40:49.869862  708040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:40:49.894961  708040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:40:49.913574  708040 ssh_runner.go:195] Run: openssl version
	I0908 13:40:49.919267  708040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:40:49.928598  708040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:40:49.931987  708040 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:40 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:40:49.932052  708040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:40:49.939070  708040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
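Those two steps implement OpenSSL's hashed-directory CA lookup: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 here, matching the symlink name), after which any TLS client scanning /etc/ssl/certs can find minikubeCA by hash. Equivalently, as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # -> b5213941.0 in this run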
	I0908 13:40:49.948278  708040 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:40:49.951690  708040 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 13:40:49.951739  708040 kubeadm.go:392] StartCluster: {Name:addons-242829 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-242829 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:40:49.951866  708040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 13:40:49.951932  708040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:40:49.990014  708040 cri.go:89] found id: ""
	I0908 13:40:49.990136  708040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 13:40:49.999181  708040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 13:40:50.009900  708040 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 13:40:50.010031  708040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 13:40:50.021425  708040 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 13:40:50.021447  708040 kubeadm.go:157] found existing configuration files:
	
	I0908 13:40:50.021531  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 13:40:50.039224  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 13:40:50.039375  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 13:40:50.048991  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 13:40:50.058465  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 13:40:50.058619  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 13:40:50.067475  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 13:40:50.077583  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 13:40:50.077708  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 13:40:50.087652  708040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 13:40:50.097934  708040 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 13:40:50.098062  708040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 13:40:50.107435  708040 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 13:40:50.150391  708040 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 13:40:50.150711  708040 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 13:40:50.170520  708040 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 13:40:50.170605  708040 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 13:40:50.170655  708040 kubeadm.go:310] OS: Linux
	I0908 13:40:50.170707  708040 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 13:40:50.170761  708040 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 13:40:50.170813  708040 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 13:40:50.170867  708040 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 13:40:50.170920  708040 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 13:40:50.170993  708040 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 13:40:50.171043  708040 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 13:40:50.171097  708040 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 13:40:50.171147  708040 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 13:40:50.233188  708040 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 13:40:50.233324  708040 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 13:40:50.233417  708040 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 13:40:50.240303  708040 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 13:40:50.246601  708040 out.go:252]   - Generating certificates and keys ...
	I0908 13:40:50.246749  708040 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 13:40:50.246850  708040 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 13:40:50.407408  708040 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 13:40:50.819317  708040 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 13:40:51.240389  708040 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 13:40:52.132979  708040 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 13:40:52.538212  708040 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 13:40:52.538606  708040 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-242829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 13:40:52.796925  708040 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 13:40:52.797158  708040 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-242829 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 13:40:53.651380  708040 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 13:40:54.482009  708040 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 13:40:55.183766  708040 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 13:40:55.184016  708040 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 13:40:56.024563  708040 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 13:40:56.232938  708040 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 13:40:57.004279  708040 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 13:40:57.261656  708040 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 13:40:58.209207  708040 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 13:40:58.209786  708040 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 13:40:58.212396  708040 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 13:40:58.215879  708040 out.go:252]   - Booting up control plane ...
	I0908 13:40:58.216003  708040 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 13:40:58.216108  708040 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 13:40:58.216185  708040 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 13:40:58.232524  708040 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 13:40:58.232636  708040 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 13:40:58.239256  708040 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 13:40:58.239575  708040 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 13:40:58.239802  708040 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 13:40:58.332462  708040 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 13:40:58.332589  708040 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 13:41:00.346376  708040 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.014225444s
	I0908 13:41:00.350985  708040 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 13:41:00.351332  708040 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 13:41:00.351611  708040 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 13:41:00.351698  708040 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 13:41:03.625465  708040 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.273828958s
	I0908 13:41:05.735265  708040 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.384285287s
	I0908 13:41:06.853061  708040 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.502058978s
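The three control-plane-check probes above hit each component's own health endpoint. The same checks can be reproduced by hand; a sketch, run on the node itself (-k skips verification of the self-signed serving certs):

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler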
	I0908 13:41:06.882674  708040 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 13:41:06.902706  708040 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 13:41:06.919276  708040 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 13:41:06.919499  708040 kubeadm.go:310] [mark-control-plane] Marking the node addons-242829 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 13:41:06.936685  708040 kubeadm.go:310] [bootstrap-token] Using token: o1zj4i.q3ail6j6yitj6kgp
	I0908 13:41:06.941644  708040 out.go:252]   - Configuring RBAC rules ...
	I0908 13:41:06.941781  708040 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 13:41:06.950215  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 13:41:06.962471  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 13:41:06.966692  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 13:41:06.970792  708040 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 13:41:06.976907  708040 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 13:41:07.261240  708040 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 13:41:07.721919  708040 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 13:41:08.259405  708040 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 13:41:08.262883  708040 kubeadm.go:310] 
	I0908 13:41:08.262965  708040 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 13:41:08.262975  708040 kubeadm.go:310] 
	I0908 13:41:08.263055  708040 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 13:41:08.263064  708040 kubeadm.go:310] 
	I0908 13:41:08.263109  708040 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 13:41:08.263173  708040 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 13:41:08.263229  708040 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 13:41:08.263237  708040 kubeadm.go:310] 
	I0908 13:41:08.263296  708040 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 13:41:08.263308  708040 kubeadm.go:310] 
	I0908 13:41:08.263357  708040 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 13:41:08.263365  708040 kubeadm.go:310] 
	I0908 13:41:08.263417  708040 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 13:41:08.263499  708040 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 13:41:08.263573  708040 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 13:41:08.263581  708040 kubeadm.go:310] 
	I0908 13:41:08.263667  708040 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 13:41:08.263749  708040 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 13:41:08.263758  708040 kubeadm.go:310] 
	I0908 13:41:08.263843  708040 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o1zj4i.q3ail6j6yitj6kgp \
	I0908 13:41:08.263950  708040 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29a6290ed8ceaba0b394f06d6a346de6983f25cd3a9793386398697a5378db4a \
	I0908 13:41:08.263974  708040 kubeadm.go:310] 	--control-plane 
	I0908 13:41:08.263982  708040 kubeadm.go:310] 
	I0908 13:41:08.264067  708040 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 13:41:08.264091  708040 kubeadm.go:310] 
	I0908 13:41:08.264176  708040 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o1zj4i.q3ail6j6yitj6kgp \
	I0908 13:41:08.264283  708040 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:29a6290ed8ceaba0b394f06d6a346de6983f25cd3a9793386398697a5378db4a 
	I0908 13:41:08.265834  708040 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 13:41:08.266091  708040 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 13:41:08.266214  708040 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
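The --discovery-token-ca-cert-hash printed in the join commands is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from the CA file (placed at /var/lib/minikube/certs/ca.crt earlier in this log) with the standard kubeadm recipe:

    # 2>/dev/null silences openssl's "writing RSA key" notice on stderr
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print 29a6290ed8ceaba0b394f06d6a346de6983f25cd3a9793386398697a5378db4a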
	I0908 13:41:08.266255  708040 cni.go:84] Creating CNI manager for ""
	I0908 13:41:08.266269  708040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:41:08.269410  708040 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 13:41:08.272312  708040 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 13:41:08.276100  708040 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 13:41:08.276119  708040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 13:41:08.294511  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
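kindnet was selected at cni.go:143 because the docker driver is paired with the crio runtime; the manifest applied above runs it as a DaemonSet in kube-system. To check it came up, a sketch (the app=kindnet label is assumed from the upstream kindnet manifest, not shown in this log):

    kubectl --context addons-242829 -n kube-system get pods -l app=kindnet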
	I0908 13:41:08.578859  708040 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 13:41:08.578990  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:08.579087  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-242829 minikube.k8s.io/updated_at=2025_09_08T13_41_08_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6 minikube.k8s.io/name=addons-242829 minikube.k8s.io/primary=true
	I0908 13:41:08.753662  708040 ops.go:34] apiserver oom_adj: -16
	I0908 13:41:08.753791  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:09.254412  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:09.754009  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:10.253993  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:10.754645  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:11.254654  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:11.754334  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:12.254184  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:12.753949  708040 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 13:41:12.855806  708040 kubeadm.go:1105] duration metric: took 4.276857237s to wait for elevateKubeSystemPrivileges
	I0908 13:41:12.855839  708040 kubeadm.go:394] duration metric: took 22.904104396s to StartCluster
	I0908 13:41:12.855856  708040 settings.go:142] acquiring lock: {Name:mk8d04bf7fe8294bcf8a246a4f876d89e8c791a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:12.856035  708040 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:41:12.856443  708040 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/kubeconfig: {Name:mke605cd08a8cfc644c26dac74a49ac24bb6558b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:41:12.856654  708040 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 13:41:12.856756  708040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 13:41:12.857011  708040 config.go:182] Loaded profile config "addons-242829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:41:12.857046  708040 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
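The toEnable map is the effective addon set for this profile: every key flagged true below has its manifests copied to /etc/kubernetes/addons and applied in parallel. The same switches are exposed on the CLI, e.g.:

    minikube -p addons-242829 addons list
    minikube -p addons-242829 addons enable metrics-server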
	I0908 13:41:12.857132  708040 addons.go:69] Setting yakd=true in profile "addons-242829"
	I0908 13:41:12.857150  708040 addons.go:238] Setting addon yakd=true in "addons-242829"
	I0908 13:41:12.857177  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.857662  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.857961  708040 addons.go:69] Setting inspektor-gadget=true in profile "addons-242829"
	I0908 13:41:12.857986  708040 addons.go:238] Setting addon inspektor-gadget=true in "addons-242829"
	I0908 13:41:12.858018  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.858448  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.858816  708040 addons.go:69] Setting metrics-server=true in profile "addons-242829"
	I0908 13:41:12.858843  708040 addons.go:238] Setting addon metrics-server=true in "addons-242829"
	I0908 13:41:12.858866  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.859315  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.859811  708040 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-242829"
	I0908 13:41:12.859836  708040 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-242829"
	I0908 13:41:12.859859  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.860351  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.863276  708040 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-242829"
	I0908 13:41:12.865917  708040 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-242829"
	I0908 13:41:12.866013  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.866065  708040 addons.go:69] Setting registry=true in profile "addons-242829"
	I0908 13:41:12.866078  708040 addons.go:238] Setting addon registry=true in "addons-242829"
	I0908 13:41:12.866124  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.866619  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.868793  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.863452  708040 addons.go:69] Setting cloud-spanner=true in profile "addons-242829"
	I0908 13:41:12.883987  708040 addons.go:238] Setting addon cloud-spanner=true in "addons-242829"
	I0908 13:41:12.884126  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.884687  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.884937  708040 addons.go:69] Setting registry-creds=true in profile "addons-242829"
	I0908 13:41:12.884955  708040 addons.go:238] Setting addon registry-creds=true in "addons-242829"
	I0908 13:41:12.884979  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.885465  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.863461  708040 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-242829"
	I0908 13:41:12.897592  708040 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-242829"
	I0908 13:41:12.897628  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.898094  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.907587  708040 addons.go:69] Setting storage-provisioner=true in profile "addons-242829"
	I0908 13:41:12.863466  708040 addons.go:69] Setting default-storageclass=true in profile "addons-242829"
	I0908 13:41:12.863470  708040 addons.go:69] Setting gcp-auth=true in profile "addons-242829"
	I0908 13:41:12.863473  708040 addons.go:69] Setting ingress=true in profile "addons-242829"
	I0908 13:41:12.863478  708040 addons.go:69] Setting ingress-dns=true in profile "addons-242829"
	I0908 13:41:12.865847  708040 out.go:179] * Verifying Kubernetes components...
	I0908 13:41:12.917116  708040 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-242829"
	I0908 13:41:12.917134  708040 addons.go:69] Setting volcano=true in profile "addons-242829"
	I0908 13:41:12.917323  708040 addons.go:69] Setting volumesnapshots=true in profile "addons-242829"
	I0908 13:41:12.944632  708040 addons.go:238] Setting addon volumesnapshots=true in "addons-242829"
	I0908 13:41:12.944711  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.945359  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.977098  708040 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-242829"
	I0908 13:41:12.977245  708040 addons.go:238] Setting addon storage-provisioner=true in "addons-242829"
	I0908 13:41:12.979380  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:12.980053  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.985029  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.032887  708040 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 13:41:12.977660  708040 mustload.go:65] Loading cluster: addons-242829
	I0908 13:41:13.033226  708040 config.go:182] Loaded profile config "addons-242829": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:41:13.033527  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.035873  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 13:41:13.035903  708040 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 13:41:13.035976  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:12.977675  708040 addons.go:238] Setting addon ingress=true in "addons-242829"
	I0908 13:41:13.054527  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.055239  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:12.977683  708040 addons.go:238] Setting addon ingress-dns=true in "addons-242829"
	I0908 13:41:13.078285  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.081716  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 13:41:13.085170  708040 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 13:41:13.085189  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 13:41:13.089217  708040 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 13:41:13.089319  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.092352  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.112670  708040 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 13:41:13.112910  708040 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 13:41:13.081764  708040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:41:12.977731  708040 addons.go:238] Setting addon volcano=true in "addons-242829"
	I0908 13:41:13.113541  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.114209  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.125237  708040 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 13:41:13.125263  708040 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 13:41:13.125328  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:12.977722  708040 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-242829"
	I0908 13:41:13.127262  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.127549  708040 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 13:41:13.127884  708040 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 13:41:13.128029  708040 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 13:41:13.128036  708040 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 13:41:13.152612  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 13:41:13.165716  708040 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 13:41:13.167638  708040 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 13:41:13.167720  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 13:41:13.167854  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.175061  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.176094  708040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
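That pipeline rewrites the live CoreDNS ConfigMap: the first sed expression inserts a hosts block mapping host.minikube.internal to the gateway address 192.168.49.1 ahead of the forward plugin, and the second enables the log plugin. To inspect the patched Corefile afterwards, a sketch:

    kubectl --context addons-242829 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'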
	I0908 13:41:13.176360  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 13:41:13.176600  708040 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 13:41:13.176613  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 13:41:13.176683  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.139673  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 13:41:13.179477  708040 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 13:41:13.143828  708040 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:41:13.179538  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 13:41:13.179611  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.199977  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.145748  708040 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:41:13.216342  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 13:41:13.216412  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.145775  708040 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:41:13.241089  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 13:41:13.241167  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.254969  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 13:41:13.260236  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 13:41:13.263108  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 13:41:13.272429  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 13:41:13.274669  708040 addons.go:238] Setting addon default-storageclass=true in "addons-242829"
	I0908 13:41:13.274717  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.275132  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.309127  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 13:41:13.311005  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.320636  708040 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 13:41:13.343605  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 13:41:13.353259  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:41:13.358081  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:41:13.362017  708040 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:13.362090  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 13:41:13.362187  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.362268  708040 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 13:41:13.362290  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:13.362364  708040 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 13:41:13.365483  708040 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:41:13.365510  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 13:41:13.365596  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.396020  708040 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 13:41:13.398928  708040 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:41:13.398952  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 13:41:13.399023  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.438514  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 13:41:13.438541  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 13:41:13.438607  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.440624  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.447963  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.448893  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.449592  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.458012  708040 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-242829"
	I0908 13:41:13.458052  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:13.458500  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:13.464271  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.464715  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.465036  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.520367  708040 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:13.520389  708040 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 13:41:13.520453  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.536823  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.539125  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.545027  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:13.549115  708040 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 13:41:13.549149  708040 retry.go:31] will retry after 231.314174ms: ssh: handshake failed: EOF
	I0908 13:41:13.557617  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.573134  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:13.590857  708040 out.go:179]   - Using image docker.io/busybox:stable
	I0908 13:41:13.593684  708040 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 13:41:13.597108  708040 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:41:13.597131  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 13:41:13.597200  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:13.633796  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:13.635002  708040 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0908 13:41:13.635068  708040 retry.go:31] will retry after 233.24099ms: ssh: handshake failed: EOF
	I0908 13:41:13.730239  708040 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:41:13.774695  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 13:41:13.774759  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 13:41:13.870576  708040 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 13:41:13.870684  708040 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 13:41:13.876325  708040 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:13.876477  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 13:41:13.978367  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 13:41:13.978440  708040 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 13:41:14.016427  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 13:41:14.016504  708040 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 13:41:14.021587  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 13:41:14.032758  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 13:41:14.051013  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:14.055794  708040 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 13:41:14.055821  708040 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 13:41:14.094847  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 13:41:14.094874  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 13:41:14.096400  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 13:41:14.122207  708040 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 13:41:14.122232  708040 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 13:41:14.140543  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 13:41:14.146518  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 13:41:14.156909  708040 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:14.156934  708040 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 13:41:14.170124  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 13:41:14.170940  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 13:41:14.173895  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 13:41:14.173919  708040 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 13:41:14.205635  708040 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 13:41:14.205661  708040 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 13:41:14.226116  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 13:41:14.238082  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 13:41:14.238108  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 13:41:14.290603  708040 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:41:14.290627  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 13:41:14.357318  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 13:41:14.408835  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 13:41:14.417633  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 13:41:14.417658  708040 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 13:41:14.419396  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 13:41:14.419414  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 13:41:14.423299  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 13:41:14.423324  708040 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 13:41:14.488219  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 13:41:14.571646  708040 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:41:14.571671  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 13:41:14.630158  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 13:41:14.630184  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 13:41:14.686802  708040 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:41:14.686827  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 13:41:14.781823  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 13:41:14.842568  708040 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 13:41:14.842595  708040 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 13:41:14.885259  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:41:14.955702  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 13:41:14.955727  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 13:41:15.077649  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 13:41:15.077677  708040 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 13:41:15.136580  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 13:41:15.136604  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 13:41:15.287985  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 13:41:15.288010  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 13:41:15.491540  708040 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.315411894s)
	I0908 13:41:15.491568  708040 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
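
The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block in front of the "forward . /etc/resolv.conf" directive, mapping host.minikube.internal to the gateway IP 192.168.49.1, and inserts a "log" directive in front of "errors", then replaces the ConfigMap. A rough Go equivalent of that text transformation, assuming only the Corefile layout implied by the sed expressions (illustration only):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord mirrors the sed pipeline: before the "forward" line it
    // inserts a hosts block mapping host.minikube.internal to the gateway IP,
    // and before the "errors" line it inserts "log".
    func injectHostRecord(corefile, gatewayIP string) string {
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
    			out = append(out,
    				"        hosts {",
    				"           "+gatewayIP+" host.minikube.internal",
    				"           fallthrough",
    				"        }")
    		}
    		if trimmed == "errors" {
    			out = append(out, "        log")
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
    	fmt.Println(injectHostRecord(corefile, "192.168.49.1"))
    }
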
	I0908 13:41:15.492533  708040 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.762271015s)
	I0908 13:41:15.493140  708040 node_ready.go:35] waiting up to 6m0s for node "addons-242829" to be "Ready" ...
	I0908 13:41:15.539157  708040 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:41:15.539183  708040 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 13:41:15.709395  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 13:41:16.918514  708040 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-242829" context rescaled to 1 replicas
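
Rescaling coredns to one replica keeps the single-node test cluster lean. The log does not show the mechanism kapi.go uses, but the equivalent operation done by hand is a single scale call; a hedged sketch via os/exec:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Scale the coredns deployment in kube-system down to one replica,
    	// the same end state the kapi.go line above reports.
    	out, err := exec.Command("kubectl", "--context", "addons-242829",
    		"-n", "kube-system", "scale", "deployment", "coredns",
    		"--replicas=1").CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("scale failed:", err)
    	}
    }
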
	I0908 13:41:17.320004  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.298377643s)
	W0908 13:41:17.562136  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:17.635648  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.602848064s)
	I0908 13:41:17.916164  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.865114352s)
	W0908 13:41:17.916200  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:17.916217  708040 retry.go:31] will retry after 176.760493ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
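
The failure above is kubectl's client-side validation: every document in an applied YAML file must declare apiVersion and kind, and some document in the rendered ig-crd.yaml does not (the deployment half applies fine, which is why later retries report everything "unchanged"). A hypothetical pre-flight check for this class of error, in Go; the split on "---" is naive and only an illustration:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // findInvalidDocs reports the index of every YAML document in a multi-doc
    // manifest that is missing apiVersion or kind -- the exact complaint
    // kubectl raised for ig-crd.yaml above.
    func findInvalidDocs(manifest string) []int {
    	var bad []int
    	for i, doc := range strings.Split(manifest, "\n---") {
    		if strings.TrimSpace(doc) == "" {
    			continue // blank separator documents are ignored by kubectl
    		}
    		if !strings.Contains(doc, "apiVersion:") || !strings.Contains(doc, "kind:") {
    			bad = append(bad, i)
    		}
    	}
    	return bad
    }

    func main() {
    	manifest := "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gadget\n---\nmetadata:\n  name: broken\n"
    	fmt.Println("documents missing apiVersion/kind:", findInvalidDocs(manifest))
    }
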
	I0908 13:41:17.916259  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.819838596s)
	I0908 13:41:17.916316  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.775751862s)
	I0908 13:41:17.916342  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.769802413s)
	I0908 13:41:17.916369  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.746224542s)
	I0908 13:41:17.916573  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.745611577s)
	I0908 13:41:18.093477  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:19.401450  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.175298853s)
	I0908 13:41:19.401944  708040 addons.go:479] Verifying addon ingress=true in "addons-242829"
	I0908 13:41:19.401662  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.992802354s)
	I0908 13:41:19.401687  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.913444294s)
	I0908 13:41:19.402442  708040 addons.go:479] Verifying addon registry=true in "addons-242829"
	I0908 13:41:19.401725  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.619875472s)
	I0908 13:41:19.401811  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.516522903s)
	W0908 13:41:19.403343  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 13:41:19.403360  708040 retry.go:31] will retry after 344.162618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
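
This failure is an ordering race rather than a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass instance go in through a single kubectl apply, and the API server has not finished establishing the new CRDs when the class is submitted, hence "ensure CRDs are installed first". minikube simply retries; an alternative sketch that waits for CRD establishment before applying the class (Go, shelling out to kubectl; illustration only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	fmt.Printf("%s", out)
    	return err
    }

    func main() {
    	// 1. Apply the CRD on its own first.
    	if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
    		panic(err)
    	}
    	// 2. Block until the API server reports the CRD as Established.
    	if err := run("wait", "--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
    		panic(err)
    	}
    	// 3. Only now apply the VolumeSnapshotClass instance.
    	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
    		panic(err)
    	}
    }
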
	I0908 13:41:19.401593  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.044247567s)
	I0908 13:41:19.403382  708040 addons.go:479] Verifying addon metrics-server=true in "addons-242829"
	I0908 13:41:19.405284  708040 out.go:179] * Verifying ingress addon...
	I0908 13:41:19.407186  708040 out.go:179] * Verifying registry addon...
	I0908 13:41:19.407194  708040 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-242829 service yakd-dashboard -n yakd-dashboard
	
	I0908 13:41:19.409994  708040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 13:41:19.410066  708040 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 13:41:19.429418  708040 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:41:19.429439  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:19.429605  708040 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 13:41:19.429612  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
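
Each of the kapi.go wait loops below polls the pods behind a label selector until they leave Pending; the node_ready loop does the same against the node's Ready condition. The pod check can be reproduced with a jsonpath query; a minimal sketch in Go, shelling out to kubectl, with the selector and namespace taken from the log lines above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // allRunning returns true once every pod matching the selector reports
    // phase Running (the loops below log "Pending" until that happens).
    func allRunning(namespace, selector string) bool {
    	out, err := exec.Command("kubectl", "get", "pods",
    		"-n", namespace, "-l", selector,
    		"-o", "jsonpath={.items[*].status.phase}").Output()
    	if err != nil {
    		return false
    	}
    	phases := strings.Fields(string(out))
    	if len(phases) == 0 {
    		return false
    	}
    	for _, p := range phases {
    		if p != "Running" {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	for !allRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx") {
    		fmt.Println("still Pending, retrying...")
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("all ingress-nginx pods Running")
    }
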
	I0908 13:41:19.689486  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.980029831s)
	I0908 13:41:19.689576  708040 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-242829"
	I0908 13:41:19.689845  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.596336273s)
	W0908 13:41:19.689910  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:19.689946  708040 retry.go:31] will retry after 515.372899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:19.692711  708040 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 13:41:19.696455  708040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 13:41:19.704002  708040 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:41:19.704125  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:19.748277  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 13:41:19.916207  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:19.916435  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:19.996475  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:20.200033  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:20.206363  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:20.415079  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:20.415292  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:20.711673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:20.920044  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:20.920812  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:21.099876  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:21.099930  708040 retry.go:31] will retry after 311.135348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:21.200420  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:21.411494  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:21.415562  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:21.415733  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:21.700870  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:21.717496  708040 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 13:41:21.717622  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:21.739940  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	I0908 13:41:21.887668  708040 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 13:41:21.908933  708040 addons.go:238] Setting addon gcp-auth=true in "addons-242829"
	I0908 13:41:21.908996  708040 host.go:66] Checking if "addons-242829" exists ...
	I0908 13:41:21.909450  708040 cli_runner.go:164] Run: docker container inspect addons-242829 --format={{.State.Status}}
	I0908 13:41:21.923512  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:21.923581  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:21.934318  708040 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 13:41:21.934371  708040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-242829
	I0908 13:41:21.962232  708040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/addons-242829/id_rsa Username:docker}
	W0908 13:41:22.010891  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:22.201150  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:22.296909  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:22.296940  708040 retry.go:31] will retry after 1.017925099s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:22.300158  708040 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 13:41:22.303251  708040 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 13:41:22.306025  708040 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 13:41:22.306043  708040 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 13:41:22.324387  708040 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 13:41:22.324467  708040 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 13:41:22.343564  708040 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:41:22.343587  708040 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 13:41:22.363178  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 13:41:22.415496  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:22.416486  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:22.702569  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:22.881271  708040 addons.go:479] Verifying addon gcp-auth=true in "addons-242829"
	I0908 13:41:22.884593  708040 out.go:179] * Verifying gcp-auth addon...
	I0908 13:41:22.888126  708040 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 13:41:22.894334  708040 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 13:41:22.894359  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:22.914026  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:22.914313  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:23.200280  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:23.315506  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:23.392231  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:23.414679  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:23.415048  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:23.700340  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:23.900807  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:23.920598  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:23.921288  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:24.142370  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:24.142444  708040 retry.go:31] will retry after 834.215853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:24.200374  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:24.391473  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:24.413698  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:24.414788  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:24.496489  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:24.699165  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:24.890915  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:24.913881  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:24.914346  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:24.977499  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:25.200372  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:25.391930  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:25.413654  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:25.414211  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:25.700735  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:25.786460  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:25.786538  708040 retry.go:31] will retry after 1.967547754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:25.891812  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:25.914784  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:25.915633  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:26.200578  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:26.391652  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:26.413824  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:26.414638  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:26.497328  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:26.699680  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:26.892563  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:26.913706  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:26.913945  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:27.200205  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:27.391347  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:27.413856  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:27.414478  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:27.700343  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:27.754653  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:27.892099  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:27.914189  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:27.915097  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:28.200116  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:28.391196  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:28.415182  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:28.415327  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:28.561320  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:28.561377  708040 retry.go:31] will retry after 1.770519939s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:28.700308  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:28.891058  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:28.913510  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:28.913879  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:28.996767  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:29.199581  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:29.391442  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:29.413682  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:29.413803  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:29.700747  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:29.891524  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:29.913807  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:29.913959  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:30.199989  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:30.332392  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:30.391362  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:30.414685  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:30.414784  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:30.704565  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:30.891328  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:30.914411  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:30.916127  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:31.184026  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:31.184058  708040 retry.go:31] will retry after 3.136059055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:31.200222  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:31.391245  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:31.413602  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:31.414065  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:31.495794  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:31.700123  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:31.890950  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:31.914256  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:31.914539  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:32.200568  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:32.391514  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:32.413657  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:32.413869  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:32.699792  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:32.891599  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:32.913516  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:32.913848  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:33.199502  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:33.391394  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:33.413550  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:33.413672  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:33.496333  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:33.702665  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:33.892453  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:33.913843  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:33.914291  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:34.200451  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:34.320628  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:34.392860  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:34.414690  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:34.415301  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:34.700489  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:34.891405  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:34.914270  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:34.914948  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:35.153952  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:35.154032  708040 retry.go:31] will retry after 6.272645383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
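retry.go reschedules the same apply on a growing, jittered delay (3.1s and 6.3s so far, with 8.1s and 9.6s below), so the identical failure recurs until the manifest is fixed or the addon enabler gives up. A minimal sketch of that retry cadence, assuming linear growth with random jitter rather than minikube's exact backoff schedule:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry reruns op up to attempts times, sleeping a jittered, growing delay
    // between failures, in the spirit of the retry.go lines in this log.
    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            d := time.Duration(i+1) * base                // grow the delay with each attempt
            d += time.Duration(rand.Int63n(int64(d) / 2)) // add up to 50% jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        fmt.Println(retry(4, 2*time.Second, func() error {
            return errors.New("Process exited with status 1")
        }))
    }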
	I0908 13:41:35.200220  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:35.390999  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:35.414180  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:35.414198  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:35.497276  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:35.701456  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:35.891540  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:35.913467  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:35.913536  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:36.199794  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:36.391837  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:36.414592  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:36.414658  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:36.699656  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:36.891628  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:36.913833  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:36.914055  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:37.200753  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:37.391555  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:37.413842  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:37.413941  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:37.700001  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:37.892050  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:37.914398  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:37.914479  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:37.996443  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:38.201339  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:38.391548  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:38.413568  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:38.413732  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:38.699724  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:38.891655  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:38.914139  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:38.914340  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:39.200303  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:39.390837  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:39.413824  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:39.413922  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:39.700051  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:39.892200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:39.913005  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:39.913454  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:40.199768  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:40.392040  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:40.413929  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:40.414613  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:40.498749  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:40.700833  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:40.891661  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:40.914165  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:40.914306  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:41.199813  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:41.391771  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:41.414076  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:41.420172  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:41.427230  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:41:41.700824  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:41.891854  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:41.919928  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:41.920210  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:42.201312  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:42.293578  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:42.293615  708040 retry.go:31] will retry after 8.145627481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:42.391580  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:42.414014  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:42.414444  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:42.700214  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:42.891546  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:42.913957  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:42.914024  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:42.997097  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:43.200168  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:43.391027  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:43.413206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:43.413432  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:43.700243  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:43.891203  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:43.913283  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:43.913593  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:44.200399  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:44.391231  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:44.413526  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:44.413648  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:44.700174  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:44.891107  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:44.913247  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:44.914030  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:45.201531  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:45.392114  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:45.414492  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:45.414801  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:45.496764  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:45.699914  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:45.891004  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:45.914432  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:45.914546  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:46.199786  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:46.391795  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:46.414263  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:46.414841  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:46.699842  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:46.891711  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:46.913996  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:46.914323  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:47.199953  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:47.392015  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:47.414200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:47.414543  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:47.700090  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:47.891105  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:47.913217  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:47.913214  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 13:41:47.996801  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:48.199853  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:48.391787  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:48.414160  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:48.414399  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:48.699631  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:48.891666  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:48.914222  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:48.914282  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:49.199837  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:49.391694  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:49.413503  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:49.414564  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:49.699482  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:49.891523  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:49.913520  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:49.913535  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:50.199630  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:50.392757  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:50.413700  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:50.413845  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:50.440019  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0908 13:41:50.497751  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:50.700673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:50.891181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:50.914248  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:50.914776  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:51.200855  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 13:41:51.245912  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:41:51.245945  708040 retry.go:31] will retry after 9.644713629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0908 13:41:51.391633  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:51.414175  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:51.415684  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:51.700275  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:51.891122  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:51.913294  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:51.913401  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:52.199762  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:52.392392  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:52.413654  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:52.413786  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:52.699608  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:52.891857  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:52.915434  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:52.917088  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:52.997035  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:53.200333  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:53.391316  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:53.413634  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:53.413799  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:53.699878  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:53.891935  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:53.914060  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:53.914510  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:54.199897  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:54.391924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:54.414287  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:54.414443  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:54.700155  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:54.891221  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:54.913598  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:54.913712  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:55.199375  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:55.392506  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:55.413303  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:55.413448  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:41:55.496218  708040 node_ready.go:57] node "addons-242829" has "Ready":"False" status (will retry)
	I0908 13:41:55.700454  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:55.891341  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:55.913539  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:55.913655  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:56.200072  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:56.391036  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:56.414115  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:56.414176  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:56.711373  708040 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 13:41:56.711399  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:56.934983  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:56.938037  708040 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 13:41:56.938064  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:56.949588  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:57.019553  708040 node_ready.go:49] node "addons-242829" is "Ready"
	I0908 13:41:57.019588  708040 node_ready.go:38] duration metric: took 41.526420739s for node "addons-242829" to be "Ready" ...
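The Ready flip above gates everything that follows: node_ready.go polls the node object until its Ready condition turns True, which took 41.5s here. A minimal client-go sketch of the same condition check, assuming a kubeconfig at the default path (a standalone illustration, not the node_ready.go implementation):

    package main

    import (
        "context"
        "fmt"
        "os"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        home, _ := os.UserHomeDir()
        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(cs, "addons-242829")
        fmt.Println(ready, err)
    }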
	I0908 13:41:57.019612  708040 api_server.go:52] waiting for apiserver process to appear ...
	I0908 13:41:57.019691  708040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:41:57.049579  708040 api_server.go:72] duration metric: took 44.192878889s to wait for apiserver process to appear ...
	I0908 13:41:57.049606  708040 api_server.go:88] waiting for apiserver healthz status ...
	I0908 13:41:57.049626  708040 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 13:41:57.062697  708040 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
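With the node Ready, the harness probes the apiserver's /healthz endpoint over HTTPS and treats a 200 with body "ok" as healthy, which is exactly what the two lines above record. A minimal sketch of that probe; TLS verification is skipped here only to keep the sketch self-contained (a faithful client would trust the cluster CA from the kubeconfig, and depending on RBAC the endpoint may require credentials):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is for illustration only; verify against the
        // cluster CA in anything real.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }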
	I0908 13:41:57.068831  708040 api_server.go:141] control plane version: v1.34.0
	I0908 13:41:57.068864  708040 api_server.go:131] duration metric: took 19.250696ms to wait for apiserver health ...
	I0908 13:41:57.068873  708040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 13:41:57.081569  708040 system_pods.go:59] 19 kube-system pods found
	I0908 13:41:57.081606  708040 system_pods.go:61] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.081614  708040 system_pods.go:61] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.081641  708040 system_pods.go:61] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.081649  708040 system_pods.go:61] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.081659  708040 system_pods.go:61] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.081664  708040 system_pods.go:61] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.081673  708040 system_pods.go:61] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.081678  708040 system_pods.go:61] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.081685  708040 system_pods.go:61] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.081695  708040 system_pods.go:61] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.081701  708040 system_pods.go:61] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.081714  708040 system_pods.go:61] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending
	I0908 13:41:57.081723  708040 system_pods.go:61] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.081730  708040 system_pods.go:61] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.081738  708040 system_pods.go:61] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending
	I0908 13:41:57.081747  708040 system_pods.go:61] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.081751  708040 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending
	I0908 13:41:57.081761  708040 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.081772  708040 system_pods.go:61] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.081789  708040 system_pods.go:74] duration metric: took 12.89997ms to wait for pod list to return data ...
	I0908 13:41:57.081802  708040 default_sa.go:34] waiting for default service account to be created ...
	I0908 13:41:57.085785  708040 default_sa.go:45] found service account: "default"
	I0908 13:41:57.085814  708040 default_sa.go:55] duration metric: took 4.005029ms for default service account to be created ...
	I0908 13:41:57.085834  708040 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 13:41:57.091056  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:57.091092  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.091102  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.091119  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.091124  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.091130  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.091135  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.091143  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.091147  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.091160  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.091165  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.091170  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.091179  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending
	I0908 13:41:57.091191  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.091202  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.091208  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending
	I0908 13:41:57.091225  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.091229  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending
	I0908 13:41:57.091235  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.091241  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.091268  708040 retry.go:31] will retry after 206.400197ms: missing components: kube-dns
	I0908 13:41:57.225230  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:57.382621  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:57.382672  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.382688  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.382697  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.382709  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.382718  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.382730  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.382734  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.382740  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.382751  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.382760  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.382765  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.382770  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending
	I0908 13:41:57.382785  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.382791  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.382798  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:41:57.382816  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.382829  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.382841  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.382851  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.382866  708040 retry.go:31] will retry after 281.461125ms: missing components: kube-dns
	I0908 13:41:57.566825  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:57.566979  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:57.567042  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:57.673634  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:57.673677  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 13:41:57.673687  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:57.673694  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:57.673699  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending
	I0908 13:41:57.673705  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:57.673709  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:57.673713  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:57.673719  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:57.673729  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:57.673734  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:57.673739  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:57.673751  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:57.673756  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending
	I0908 13:41:57.673761  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:57.673773  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:41:57.673779  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:57.673787  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.673800  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:57.673805  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 13:41:57.673851  708040 retry.go:31] will retry after 355.091475ms: missing components: kube-dns
	I0908 13:41:57.728459  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:57.892378  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:57.914406  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:57.914736  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:58.034165  708040 system_pods.go:86] 19 kube-system pods found
	I0908 13:41:58.034205  708040 system_pods.go:89] "coredns-66bc5c9577-gnj8l" [956ba41e-f66c-4b4b-848a-e870dd479e6e] Running
	I0908 13:41:58.034216  708040 system_pods.go:89] "csi-hostpath-attacher-0" [3243f267-5bc4-481f-9ca1-9c6a2d5f2d4a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 13:41:58.034223  708040 system_pods.go:89] "csi-hostpath-resizer-0" [f3966cb8-b9a9-48c7-9a21-531ed447f646] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 13:41:58.034231  708040 system_pods.go:89] "csi-hostpathplugin-d5xdd" [ddd98c4e-7ff5-46c8-b2b9-cbeea00b0f96] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 13:41:58.034236  708040 system_pods.go:89] "etcd-addons-242829" [e301b1f9-76c3-4c13-b483-f19be0496e6f] Running
	I0908 13:41:58.034242  708040 system_pods.go:89] "kindnet-hkztp" [092d93ec-a85e-4fb2-861f-68fefd75cfee] Running
	I0908 13:41:58.034246  708040 system_pods.go:89] "kube-apiserver-addons-242829" [c0d53905-503c-4452-89a9-ce8988f9cc33] Running
	I0908 13:41:58.034250  708040 system_pods.go:89] "kube-controller-manager-addons-242829" [eb01bf44-55ba-4acf-bbd0-be395ffd2516] Running
	I0908 13:41:58.034257  708040 system_pods.go:89] "kube-ingress-dns-minikube" [e9e939a1-8af2-4645-9e83-2ce6a61144c7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 13:41:58.034261  708040 system_pods.go:89] "kube-proxy-sn2lk" [9671145d-80cd-408f-9b0e-86f834714336] Running
	I0908 13:41:58.034267  708040 system_pods.go:89] "kube-scheduler-addons-242829" [5b19df33-db9b-4e94-a78e-9989e008ce79] Running
	I0908 13:41:58.034273  708040 system_pods.go:89] "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 13:41:58.034285  708040 system_pods.go:89] "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 13:41:58.034302  708040 system_pods.go:89] "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 13:41:58.034308  708040 system_pods.go:89] "registry-creds-764b6fb674-sptcc" [e6410ae0-5f50-4418-9999-9f6c35305dd4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 13:41:58.034314  708040 system_pods.go:89] "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 13:41:58.034323  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dws5" [a724858d-50d9-464b-901e-d23d41f48c5d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:58.034330  708040 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr6cd" [9041b71c-c126-42b3-a375-c42b09696629] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 13:41:58.034337  708040 system_pods.go:89] "storage-provisioner" [4fd8885b-2e71-4639-9830-2078a88e5493] Running
	I0908 13:41:58.034346  708040 system_pods.go:126] duration metric: took 948.503808ms to wait for k8s-apps to be running ...
	I0908 13:41:58.034353  708040 system_svc.go:44] waiting for kubelet service to be running ....
	I0908 13:41:58.034413  708040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:41:58.046675  708040 system_svc.go:56] duration metric: took 12.311351ms WaitForService to wait for kubelet
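(The WaitForService step above shells out over SSH. A minimal sketch of that check in Go, assuming a systemd host and mirroring the exact command string from the log; the log-message wording is an illustration, not minikube's own output.)

package main

import (
	"fmt"
	"os/exec"
)

// `systemctl is-active --quiet ...` exits 0 only when the unit is active,
// so the exit code alone answers "is kubelet running?".
func main() {
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}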
	I0908 13:41:58.046704  708040 kubeadm.go:578] duration metric: took 45.190021561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:41:58.046723  708040 node_conditions.go:102] verifying NodePressure condition ...
	I0908 13:41:58.050746  708040 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 13:41:58.050780  708040 node_conditions.go:123] node cpu capacity is 2
	I0908 13:41:58.050794  708040 node_conditions.go:105] duration metric: took 4.065304ms to run NodePressure ...
	I0908 13:41:58.050807  708040 start.go:241] waiting for startup goroutines ...
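(The long run of kapi.go:96 lines that follows is a polling loop: each addon's pod is re-checked by label selector until it leaves Pending. A minimal client-go sketch of that pattern; the function name, interval, and timeout are assumptions for illustration, not minikube's exact values.)

package kapiwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls the given label selector until a matching pod
// reports phase Running, re-checking on a fixed interval.
func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // still Pending, as in the log lines below
		})
}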
	I0908 13:41:58.199529  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:58.392033  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:58.492573  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:58.492572  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:58.699695  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:58.892010  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:58.913582  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:58.913660  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:59.202061  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:59.391992  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:59.414658  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:41:59.415011  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:59.700822  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:41:59.893153  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:41:59.914943  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:41:59.915255  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:00.203583  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:00.395107  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:00.416597  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:00.422923  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:00.702077  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:00.891772  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:42:00.894942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:00.919178  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:00.919549  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:01.200098  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:01.390899  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:01.414991  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:01.415950  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:01.708233  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:01.892047  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:01.914926  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:01.915117  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:02.057004  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.165196006s)
	W0908 13:42:02.057044  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:42:02.057065  708040 retry.go:31] will retry after 13.278193176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
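(The validation error above means the first YAML document in ig-crd.yaml is missing its apiVersion and kind header fields, which every Kubernetes manifest must carry. A minimal sketch of that check in Go, assuming gopkg.in/yaml.v3; the struct and sample document are hypothetical, chosen only to reproduce the failure mode.)

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields kubectl's validation reported missing.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	// Header fields absent, as in the failing ig-crd.yaml.
	doc := []byte("metadata:\n  name: example-crd\n")
	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		panic(err)
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		fmt.Println("error validating data: [apiVersion not set, kind not set]")
	}
}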
	I0908 13:42:02.201288  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:02.391743  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:02.415314  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:02.415517  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:02.700021  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:02.892200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:02.913938  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:02.914055  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:03.200870  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:03.391020  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:03.413893  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:03.414049  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:03.704273  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:03.911452  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:03.939772  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:03.951983  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:04.200704  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:04.392476  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:04.416675  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:04.418078  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:04.708637  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:04.892806  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:04.916241  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:04.916675  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:05.200541  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:05.392439  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:05.417597  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:05.419148  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:05.700913  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:05.901002  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:05.925950  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:05.926359  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:06.200044  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:06.392128  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:06.414738  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:06.414818  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:06.704157  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:06.891942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:06.936246  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:06.936383  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:07.199387  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:07.391732  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:07.415433  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:07.415519  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:07.700478  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:07.904197  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:07.915680  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:08.005718  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:08.199743  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:08.391527  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:08.414516  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:08.414657  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:08.700834  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:08.891999  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:08.913876  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:08.914421  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:09.201248  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:09.402491  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:09.502944  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:09.503470  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:09.699929  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:09.899606  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:09.927376  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:09.927904  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:10.200596  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:10.392667  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:10.416487  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:10.418102  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:10.705038  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:10.891719  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:10.917777  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:10.918158  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:11.200737  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:11.392926  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:11.414664  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:11.415437  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:11.701301  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:11.892200  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:11.917313  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:11.925334  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:12.201209  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:12.391822  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:12.417373  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:12.417850  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:12.703937  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:12.891911  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:12.918853  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:12.918971  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:13.200511  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:13.391860  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:13.415644  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:13.415804  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:13.700005  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:13.894852  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:13.920852  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:13.921548  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:14.200826  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:14.392721  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:14.415285  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:14.415634  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:14.699879  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:14.892183  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:14.915455  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:14.915721  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:15.209597  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:15.336265  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:42:15.391878  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:15.416328  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:15.416458  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:15.705682  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:15.891947  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:15.922592  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:15.922729  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:16.203768  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:16.405506  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:16.420961  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:16.421351  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:16.437458  708040 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.101153275s)
	W0908 13:42:16.437498  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 13:42:16.437516  708040 retry.go:31] will retry after 46.893704239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
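(Note the retry spacing: the first retry.go:31 delay was ~13.3s, this one ~46.9s, i.e. the delays grow with jitter between attempts. A minimal Go sketch of that backoff shape; the base delay, growth factor, and attempt count are assumptions for illustration, not minikube's exact tuning.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfterBackoff reruns fn with a growing, jittered delay, printing the
// same "will retry after" shape seen in the log.
func retryAfterBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 3 // roughly matches the 13s -> 46s spacing above
	}
	return err
}

func main() {
	_ = retryAfterBackoff(2, 10*time.Second, func() error {
		return errors.New("Process exited with status 1")
	})
}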
	I0908 13:42:16.702477  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:16.894974  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:16.916846  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:16.917240  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:17.203294  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:17.392105  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:17.414027  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:17.414341  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:17.702085  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:17.895313  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:17.917126  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:17.917638  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:18.200715  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:18.394735  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:18.418049  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:18.418161  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:18.725785  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:18.892066  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:18.915212  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:18.915343  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:19.200012  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:19.391319  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:19.414957  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:19.415101  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:19.700790  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:19.892192  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:19.915034  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:19.915164  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:20.200616  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:20.391950  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:20.417046  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:20.417168  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:20.702794  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:20.894109  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:20.913812  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:20.914256  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:21.200181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:21.391295  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:21.415065  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:21.416435  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:21.699773  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:21.891910  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:21.950868  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:21.952350  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:22.202275  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:22.391663  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:22.417138  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:22.417592  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:22.702111  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:22.891612  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:22.916360  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:22.916849  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:23.200210  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:23.391942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:23.418637  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:23.418828  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:23.708945  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:23.892107  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:23.915071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:23.915415  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:24.201010  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:24.391471  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:24.415964  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:24.416401  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:24.703181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:24.892030  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:24.929112  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:24.932658  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:25.203165  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:25.391569  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:25.416386  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:25.416603  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:25.700342  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:25.892909  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:25.916273  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:25.916633  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:26.201912  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:26.397695  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:26.415829  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:26.416224  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:26.701053  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:26.898778  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:26.931277  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:26.931678  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:27.203555  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:27.392291  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:27.415043  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:27.416406  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:27.703185  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:27.891919  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:27.914153  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:27.914383  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:28.199856  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:28.392399  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:28.414108  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:28.414409  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:28.701720  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:28.891673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:28.916785  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:28.917244  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:29.200064  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:29.392525  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:29.415936  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:29.416451  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:29.699779  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:29.891337  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:29.913420  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:29.913840  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:30.200731  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:30.391498  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:30.414937  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:30.415064  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:30.700714  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:30.892927  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:30.915982  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:30.916730  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:31.199687  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:31.392193  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:31.414979  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:31.416500  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:31.701564  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:31.891028  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:31.914094  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:31.914181  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:32.200584  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:32.391262  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:32.413807  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:32.414761  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:32.699960  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:32.890927  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:32.913071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:32.913328  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:33.200603  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:33.391987  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:33.414312  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:33.414561  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:33.699776  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:33.891906  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:33.914481  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:33.914772  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:34.201052  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:34.392262  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:34.415850  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:34.416274  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:34.700643  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:34.891831  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:34.915387  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:34.915655  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:35.200468  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:35.392164  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:35.414092  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:35.415188  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:35.700572  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:35.891808  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:35.916037  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:35.916404  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:36.199814  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:36.392620  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:36.415501  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:36.415875  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:36.705161  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:36.906264  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:36.920416  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:36.920785  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:37.202391  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:37.391728  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:37.414885  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:37.415009  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:37.701029  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:37.891477  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:37.915580  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:37.921900  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:38.200907  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:38.393091  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:38.415535  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:38.416116  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:38.701379  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:38.894374  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:38.921424  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:38.921748  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:39.200396  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:39.392309  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:39.426132  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:39.426772  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:39.705841  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:39.891924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:39.916625  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:39.917844  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:40.202588  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:40.392206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:40.416275  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:40.416772  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:40.718333  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:40.892901  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:40.917001  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:40.919710  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:41.201121  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:41.391412  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:41.413128  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:41.413679  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:41.700536  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:41.893881  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:41.919616  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:41.920805  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:42.207641  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:42.392775  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:42.415942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:42.416217  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:42.701381  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:42.891537  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:42.916114  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:42.916274  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:43.203450  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:43.392649  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:43.415899  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:43.417881  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:43.700500  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:43.891954  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:43.913904  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:43.914686  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:44.200037  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:44.392428  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:44.414058  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:44.414205  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:44.701191  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:44.891366  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:44.914673  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:44.915174  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:45.211385  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:45.391890  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:45.415605  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:45.416202  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:45.700879  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:45.891398  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:45.913487  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:45.914138  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:46.201370  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:46.393781  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:46.414378  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:46.415167  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:46.708882  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:46.891913  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:46.919176  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:46.919314  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:47.200847  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:47.392495  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:47.415605  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:47.416670  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:47.701988  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:47.893607  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:47.916062  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:47.916493  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:48.200024  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:48.393365  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:48.494033  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:48.494432  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:48.700990  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:48.891205  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:48.913924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:48.913974  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:49.201352  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:49.391165  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:49.413892  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 13:42:49.414319  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:49.699974  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:49.891206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:49.913415  708040 kapi.go:107] duration metric: took 1m30.503424251s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 13:42:49.913771  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:50.201701  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:50.392115  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:50.413272  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:50.700431  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:50.891054  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:50.914047  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:51.200252  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:51.391821  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:51.414127  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:51.700432  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:51.891305  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:51.913387  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:52.199895  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:52.392334  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:52.413358  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:52.702550  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:52.892065  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:52.913299  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:53.199654  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:53.391779  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:53.413998  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:53.700250  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:53.890775  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:53.913816  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:54.199932  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:54.391273  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:54.413039  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:54.700347  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:54.891254  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:54.913304  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:55.199475  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:55.391610  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:55.413532  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:55.700171  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:55.890999  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:55.913141  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:56.200177  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:56.391547  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:56.413528  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:56.700265  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:56.891921  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:56.913897  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:57.200280  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:57.392110  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:57.413874  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:57.699812  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:57.891615  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:57.913419  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:58.199473  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:58.392019  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:58.413787  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:58.699991  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:58.890942  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:58.914010  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:59.200117  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:59.391503  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:59.413512  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:42:59.700005  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:42:59.890963  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:42:59.913087  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:00.208476  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:00.393701  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:00.414355  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:00.700071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:00.891211  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:00.914686  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:01.200551  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:01.392243  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:01.413413  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:01.700338  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:01.891246  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:01.913154  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:02.200313  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:02.391080  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:02.414088  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:02.699678  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:02.891310  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:02.913425  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:03.200952  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:03.332230  708040 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 13:43:03.391613  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:03.415063  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:03.701506  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:03.892585  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:03.914121  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 13:43:04.177001  708040 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 13:43:04.177107  708040 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
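	Note on the inspektor-gadget failure above: this is a manifest validation error, not a cluster problem. kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file is missing the mandatory apiVersion and kind fields (commonly an empty or comment-only document left behind a "---" separator, or a truncated object). Every object kubectl applies must carry both. For reference, a structurally valid CRD manifest looks like the minimal sketch below; the group, names, and schema are illustrative placeholders, not the actual gadget CRD:
	
	  apiVersion: apiextensions.k8s.io/v1
	  kind: CustomResourceDefinition
	  metadata:
	    name: examples.demo.example.com    # illustrative; the real CRD name is not shown in this log
	  spec:
	    group: demo.example.com
	    names:
	      kind: Example
	      listKind: ExampleList
	      plural: examples
	      singular: example
	    scope: Namespaced
	    versions:
	      - name: v1
	        served: true
	        storage: true
	        schema:
	          openAPIV3Schema:
	            type: object               # v1 CRDs require a schema for every served version
	
	The companion ig-deployment.yaml applied cleanly (every object in the stdout above is "unchanged" or "configured"), so only the CRD file needs attention.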
	I0908 13:43:04.199889  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:04.392409  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:04.413633  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:04.700061  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:04.891196  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:04.913078  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:05.200510  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:05.391889  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:05.413959  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:05.700719  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:05.891708  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:05.913749  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:06.199566  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:06.391930  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:06.414009  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:06.700787  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:06.891766  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:06.913788  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:07.200138  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:07.391378  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:07.414123  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:07.700521  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:07.891667  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:07.913795  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:08.200271  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:08.392522  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:08.413949  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:08.700268  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:08.890981  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:08.916396  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:09.200019  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:09.391071  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:09.413985  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:09.700852  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:09.897461  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:09.913593  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:10.200434  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:10.392683  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:10.417370  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:10.708550  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:10.892960  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:10.915962  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:11.200552  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:11.391586  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:11.414039  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:11.700585  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:11.892126  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:11.913631  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:12.203273  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:12.391140  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:12.414913  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:12.700619  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:12.897534  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:12.913904  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:13.200303  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:13.391458  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:13.413511  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:13.703099  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:13.891269  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:13.913228  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:14.199716  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:14.396647  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:14.493268  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:14.701683  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:14.892415  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:14.915121  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:15.200726  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:15.393225  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:15.416842  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:15.703215  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:15.893632  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:15.914839  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:16.201247  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:16.391968  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:16.413081  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:16.701902  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:16.892358  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:16.913494  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:17.201206  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:17.391913  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:17.414124  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:17.701648  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:17.891970  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:17.913790  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:18.200204  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:18.392444  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:18.414315  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:18.701556  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:18.893288  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:18.914479  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:19.200197  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:19.391302  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:19.413902  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:19.700646  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:19.892919  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:19.914243  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:20.201524  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:20.392154  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 13:43:20.414313  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:20.709745  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:20.891351  708040 kapi.go:107] duration metric: took 1m58.003225632s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 13:43:20.894429  708040 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-242829 cluster.
	I0908 13:43:20.897459  708040 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 13:43:20.900396  708040 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
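	Note on the gcp-auth messages above: the addon's mutating webhook skips any pod that carries the gcp-auth-skip-secret label key. A minimal sketch of an opted-out pod follows; only the label key comes from the message above, while the pod name, image, and label value are assumptions for illustration:
	
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                 # illustrative name
	    labels:
	      gcp-auth-skip-secret: "true"     # the key is what the webhook checks; the value here is an assumption
	  spec:
	    containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]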
	I0908 13:43:20.913059  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:21.201554  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:21.414609  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:21.704785  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:21.916471  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:22.200628  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:22.423967  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:22.728636  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:22.920898  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:23.199924  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:23.413333  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:23.700443  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:23.913832  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:24.204918  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:24.414042  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:24.701469  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:24.913841  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:25.200380  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:25.413926  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:25.702936  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:25.916761  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:26.200246  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:26.414306  708040 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 13:43:26.700721  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:26.917694  708040 kapi.go:107] duration metric: took 2m7.507624242s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 13:43:27.205486  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:27.735420  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:28.200558  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:28.702172  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:29.235238  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:29.702067  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:30.200421  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:30.699845  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:31.202478  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:31.699646  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:32.203036  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:32.700330  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:33.201044  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:33.699719  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:34.200974  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:34.700713  708040 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 13:43:35.201629  708040 kapi.go:107] duration metric: took 2m15.505172409s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 13:43:35.207128  708040 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0908 13:43:35.210029  708040 addons.go:514] duration metric: took 2m22.352955183s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds storage-provisioner amd-gpu-device-plugin nvidia-device-plugin default-storageclass metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0908 13:43:35.210091  708040 start.go:246] waiting for cluster config update ...
	I0908 13:43:35.210117  708040 start.go:255] writing updated cluster config ...
	I0908 13:43:35.210436  708040 ssh_runner.go:195] Run: rm -f paused
	I0908 13:43:35.214047  708040 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:43:35.217576  708040 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gnj8l" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.223440  708040 pod_ready.go:94] pod "coredns-66bc5c9577-gnj8l" is "Ready"
	I0908 13:43:35.223470  708040 pod_ready.go:86] duration metric: took 5.863456ms for pod "coredns-66bc5c9577-gnj8l" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.225882  708040 pod_ready.go:83] waiting for pod "etcd-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.230367  708040 pod_ready.go:94] pod "etcd-addons-242829" is "Ready"
	I0908 13:43:35.230456  708040 pod_ready.go:86] duration metric: took 4.547931ms for pod "etcd-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.232968  708040 pod_ready.go:83] waiting for pod "kube-apiserver-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.237856  708040 pod_ready.go:94] pod "kube-apiserver-addons-242829" is "Ready"
	I0908 13:43:35.237884  708040 pod_ready.go:86] duration metric: took 4.890221ms for pod "kube-apiserver-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.240402  708040 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.617819  708040 pod_ready.go:94] pod "kube-controller-manager-addons-242829" is "Ready"
	I0908 13:43:35.617863  708040 pod_ready.go:86] duration metric: took 377.426882ms for pod "kube-controller-manager-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:35.818155  708040 pod_ready.go:83] waiting for pod "kube-proxy-sn2lk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.218628  708040 pod_ready.go:94] pod "kube-proxy-sn2lk" is "Ready"
	I0908 13:43:36.218660  708040 pod_ready.go:86] duration metric: took 400.478423ms for pod "kube-proxy-sn2lk" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.418841  708040 pod_ready.go:83] waiting for pod "kube-scheduler-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.818150  708040 pod_ready.go:94] pod "kube-scheduler-addons-242829" is "Ready"
	I0908 13:43:36.818180  708040 pod_ready.go:86] duration metric: took 399.309034ms for pod "kube-scheduler-addons-242829" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 13:43:36.818194  708040 pod_ready.go:40] duration metric: took 1.604108577s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 13:43:36.873448  708040 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 13:43:36.876810  708040 out.go:179] * Done! kubectl is now configured to use "addons-242829" cluster and "default" namespace by default
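	Note on the version line at 13:43:36: kubectl 1.33.2 against a 1.34.0 control plane is a minor skew of 1, which is within kubectl's supported skew of one minor version on either side of the API server; that is why it is logged informationally rather than as a warning. The pairing can be re-checked at any time with:
	
	  kubectl --context addons-242829 version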
	
	
	==> CRI-O <==
	Sep 08 13:47:11 addons-242829 crio[983]: time="2025-09-08 13:47:11.076363721Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 13:48:11 addons-242829 crio[983]: time="2025-09-08 13:48:11.629037133Z" level=info msg="Pulling image: docker.io/nginx:latest" id=44f027cb-dcaa-4306-9629-22bcd67cb332 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:48:11 addons-242829 crio[983]: time="2025-09-08 13:48:11.631521643Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 13:48:23 addons-242829 crio[983]: time="2025-09-08 13:48:23.597204938Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=36108711-a70c-4f69-ba93-100b2cc8013e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:48:23 addons-242829 crio[983]: time="2025-09-08 13:48:23.597431707Z" level=info msg="Image docker.io/nginx:alpine not found" id=36108711-a70c-4f69-ba93-100b2cc8013e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:48:35 addons-242829 crio[983]: time="2025-09-08 13:48:35.596826880Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1bb95682-0c04-4365-8bbf-ac85d965a53d name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:48:35 addons-242829 crio[983]: time="2025-09-08 13:48:35.597063741Z" level=info msg="Image docker.io/nginx:alpine not found" id=1bb95682-0c04-4365-8bbf-ac85d965a53d name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:48:41 addons-242829 crio[983]: time="2025-09-08 13:48:41.882393647Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=1c220283-f189-47f5-bd04-6da86e04cbaf name=/runtime.v1.ImageService/PullImage
	Sep 08 13:48:41 addons-242829 crio[983]: time="2025-09-08 13:48:41.884395959Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 13:49:26 addons-242829 crio[983]: time="2025-09-08 13:49:26.596195829Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=90728a7c-ac4a-423c-a55c-674b6ba4ca94 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:49:26 addons-242829 crio[983]: time="2025-09-08 13:49:26.596424403Z" level=info msg="Image docker.io/nginx:alpine not found" id=90728a7c-ac4a-423c-a55c-674b6ba4ca94 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:49:36 addons-242829 crio[983]: time="2025-09-08 13:49:36.595981905Z" level=info msg="Pulling image: docker.io/nginx:latest" id=2b8085ef-a984-4836-b53b-23ff945b4808 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:49:36 addons-242829 crio[983]: time="2025-09-08 13:49:36.599213691Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 08 13:49:41 addons-242829 crio[983]: time="2025-09-08 13:49:41.597169302Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=9bbd3c53-5e97-499f-921f-0c8b805ef0d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:49:41 addons-242829 crio[983]: time="2025-09-08 13:49:41.597415960Z" level=info msg="Image docker.io/nginx:alpine not found" id=9bbd3c53-5e97-499f-921f-0c8b805ef0d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:49:54 addons-242829 crio[983]: time="2025-09-08 13:49:54.595895998Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f3e6f8ef-5b51-493e-acc6-9bdaed1464d6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:49:54 addons-242829 crio[983]: time="2025-09-08 13:49:54.596148375Z" level=info msg="Image docker.io/nginx:alpine not found" id=f3e6f8ef-5b51-493e-acc6-9bdaed1464d6 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:50:06 addons-242829 crio[983]: time="2025-09-08 13:50:06.986246285Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=42d73d9d-0ba6-4f2b-b459-170b131e6fc8 name=/runtime.v1.ImageService/PullImage
	Sep 08 13:50:06 addons-242829 crio[983]: time="2025-09-08 13:50:06.987512667Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 08 13:50:50 addons-242829 crio[983]: time="2025-09-08 13:50:50.595943605Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e5b7e4f8-1228-4ee8-89cb-b51257072214 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:50:50 addons-242829 crio[983]: time="2025-09-08 13:50:50.596236229Z" level=info msg="Image docker.io/nginx:alpine not found" id=e5b7e4f8-1228-4ee8-89cb-b51257072214 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:51:02 addons-242829 crio[983]: time="2025-09-08 13:51:02.596578637Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=19428b41-b786-4fb0-b5af-c2cda63c3bac name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:51:02 addons-242829 crio[983]: time="2025-09-08 13:51:02.596801672Z" level=info msg="Image docker.io/nginx:alpine not found" id=19428b41-b786-4fb0-b5af-c2cda63c3bac name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:51:14 addons-242829 crio[983]: time="2025-09-08 13:51:14.596111873Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=72216afa-d3e1-4b61-8738-0b8adc633b58 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 13:51:14 addons-242829 crio[983]: time="2025-09-08 13:51:14.596333719Z" level=info msg="Image docker.io/nginx:alpine not found" id=72216afa-d3e1-4b61-8738-0b8adc633b58 name=/runtime.v1.ImageService/ImageStatus
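	Note on the CRI-O excerpt above: it captures the proximate failure behind the Ingress test. Between 13:47 and 13:51 the runtime repeatedly starts pulls of docker.io/nginx:alpine (and docker.io/nginx:latest) that never complete, and every interleaved status check answers "Image docker.io/nginx:alpine not found"; with the image never landing on the node, the nginx test pod cannot leave the image-pull back-off cycle. Two standard commands for confirming this from the Kubernetes side, using the context name from this run:
	
	  kubectl --context addons-242829 describe pod nginx -n default
	  kubectl --context addons-242829 get events -n default --sort-by=.lastTimestamp
	
	The Events section of the describe output records each pull attempt and back-off; the time-sorted event list shows whether the failures look like registry rate-limiting, network timeouts, or something node-local.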
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a30f95fe1632f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   a2374c9814eda       busybox
	2584dfdafdc55       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   27dd9d90ed5b0       csi-hostpathplugin-d5xdd
	2bafe0dd5f7b5       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          7 minutes ago       Running             csi-provisioner                          0                   27dd9d90ed5b0       csi-hostpathplugin-d5xdd
	98d36255b3935       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            7 minutes ago       Running             liveness-probe                           0                   27dd9d90ed5b0       csi-hostpathplugin-d5xdd
	6ab0f2d5eb10b       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           7 minutes ago       Running             hostpath                                 0                   27dd9d90ed5b0       csi-hostpathplugin-d5xdd
	c0e2fa2396b15       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef                             8 minutes ago       Running             controller                               0                   3acaa6eaaebe1       ingress-nginx-controller-9cc49f96f-6crd5
	2fb4482f6cd02       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                8 minutes ago       Running             node-driver-registrar                    0                   27dd9d90ed5b0       csi-hostpathplugin-d5xdd
	59980ed3186f7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b3f8a40cecf84afd8a5299442eab04c52f913ef6194e01dc4fbeb783f9d42c58                            8 minutes ago       Running             gadget                                   0                   2755b3771e4ac       gadget-kldqv
	91177dbb04658       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                                             8 minutes ago       Exited              patch                                    3                   64b95056d642c       ingress-nginx-admission-patch-9t8cq
	e6beed061bfe6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24                   8 minutes ago       Exited              create                                   0                   c88791b648b02       ingress-nginx-admission-create-4f2x7
	e418de0f8f874       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              8 minutes ago       Running             csi-resizer                              0                   ab24a99177fcb       csi-hostpath-resizer-0
	779acd071dd8e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   8 minutes ago       Running             csi-external-health-monitor-controller   0                   27dd9d90ed5b0       csi-hostpathplugin-d5xdd
	e4c12d80a2f80       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               9 minutes ago       Running             minikube-ingress-dns                     0                   56252405e86a5       kube-ingress-dns-minikube
	c8385dc28e320       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             9 minutes ago       Running             csi-attacher                             0                   ad691b897a11d       csi-hostpath-attacher-0
	f0337ecd51e33       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      9 minutes ago       Running             volume-snapshot-controller               0                   5f9bc92a09b16       snapshot-controller-7d9fbc56b8-8dws5
	dd62f63fcdf04       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      9 minutes ago       Running             volume-snapshot-controller               0                   dda9283d651fa       snapshot-controller-7d9fbc56b8-tr6cd
	fc30762357fd6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             9 minutes ago       Running             storage-provisioner                      0                   ffc810ffc2458       storage-provisioner
	5f02537abeb2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             9 minutes ago       Running             coredns                                  0                   a5c6442ae7d3b       coredns-66bc5c9577-gnj8l
	8a649dce93b09       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             10 minutes ago      Running             kindnet-cni                              0                   ac2934957e247       kindnet-hkztp
	f8d8b89978600       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                                             10 minutes ago      Running             kube-proxy                               0                   061e17e5b84dc       kube-proxy-sn2lk
	02451464eed55       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                                             10 minutes ago      Running             kube-apiserver                           0                   79b0ac473e4af       kube-apiserver-addons-242829
	be8406ca5c1ed       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                                             10 minutes ago      Running             kube-scheduler                           0                   f69d79fcbdf4d       kube-scheduler-addons-242829
	4c1bb6d039fd7       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                                             10 minutes ago      Running             kube-controller-manager                  0                   6b16ee517ae64       kube-controller-manager-addons-242829
	04c1e12420631       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             10 minutes ago      Running             etcd                                     0                   a184ebc732186       etcd-addons-242829
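	Note on the table above: it is CRI-O container status output (the format matches what the following produces on the node):
	
	  sudo crictl ps -a
	
	The ingress-nginx-admission-patch container shows ATTEMPT 3 in state Exited, meaning it was restarted a few times before completing, while every long-running addon container is Running with ATTEMPT 0. Nothing here references the nginx test image, which never produced a container at all.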
	
	
	==> coredns [5f02537abeb2ea5962133ab212d003231638185d7c37aac98e0dca0d6cffaf4a] <==
	[INFO] 10.244.0.16:36903 - 18941 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002481748s
	[INFO] 10.244.0.16:36903 - 60209 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000272317s
	[INFO] 10.244.0.16:36903 - 14947 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000234769s
	[INFO] 10.244.0.16:35234 - 9101 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000207471s
	[INFO] 10.244.0.16:35234 - 9331 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.002899256s
	[INFO] 10.244.0.16:50633 - 10263 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00013606s
	[INFO] 10.244.0.16:50633 - 10740 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000130669s
	[INFO] 10.244.0.16:48023 - 56314 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126213s
	[INFO] 10.244.0.16:48023 - 56503 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000150435s
	[INFO] 10.244.0.16:39062 - 26032 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001682981s
	[INFO] 10.244.0.16:39062 - 25844 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001749042s
	[INFO] 10.244.0.16:34404 - 25315 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137184s
	[INFO] 10.244.0.16:34404 - 25487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000150706s
	[INFO] 10.244.0.21:60704 - 24519 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196541s
	[INFO] 10.244.0.21:41264 - 15606 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000088214s
	[INFO] 10.244.0.21:59090 - 31906 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126689s
	[INFO] 10.244.0.21:40895 - 42869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081051s
	[INFO] 10.244.0.21:58684 - 59770 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180975s
	[INFO] 10.244.0.21:43504 - 19377 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095436s
	[INFO] 10.244.0.21:40246 - 50157 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002048994s
	[INFO] 10.244.0.21:50913 - 18304 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002595045s
	[INFO] 10.244.0.21:49429 - 26048 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001926038s
	[INFO] 10.244.0.21:37250 - 24804 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002155621s
	[INFO] 10.244.0.24:35998 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000195925s
	[INFO] 10.244.0.24:51410 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130923s
	
	
	==> describe nodes <==
	Name:               addons-242829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-242829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=addons-242829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_41_08_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-242829
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-242829"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:41:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-242829
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 13:51:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 13:51:00 +0000   Mon, 08 Sep 2025 13:41:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-242829
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9a623c9b5844688a0a430e9b7ac547a
	  System UUID:                c25e32cb-1ec5-4151-a0b3-c6ef8f1dc70a
	  Boot ID:                    3b69f852-7505-47f7-82de-581d66319e23
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-kldqv                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-6crd5    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-gnj8l                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-d5xdd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                 etcd-addons-242829                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-hkztp                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-242829                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-242829       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-sn2lk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-242829                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-creds-764b6fb674-sptcc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-8dws5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-7d9fbc56b8-tr6cd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 10m    kube-proxy       
	  Normal   Starting                 10m    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m    kubelet          Node addons-242829 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m    kubelet          Node addons-242829 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m    kubelet          Node addons-242829 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m    node-controller  Node addons-242829 event: Registered Node addons-242829 in Controller
	  Normal   NodeReady                9m31s  kubelet          Node addons-242829 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 8 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000684] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000907] FS-Cache: O-cookie d=00000000f75621f8{9P.session} n=000000002e0501ee
	[  +0.001029] FS-Cache: O-key=[10] '34323936393639353436'
	[  +0.000727] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000883] FS-Cache: N-cookie d=00000000f75621f8{9P.session} n=00000000ccfa13d2
	[  +0.001067] FS-Cache: N-key=[10] '34323936393639353436'
	[Sep 8 13:39] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [04c1e124206317e3618b751b13c0f3162ddfe05b2ce4e95d253df73faf85f04e] <==
	{"level":"info","ts":"2025-09-08T13:41:16.596691Z","caller":"traceutil/trace.go:172","msg":"trace[1413570816] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"132.7297ms","start":"2025-09-08T13:41:16.463946Z","end":"2025-09-08T13:41:16.596675Z","steps":["trace[1413570816] 'process raft request'  (duration: 132.553091ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:16.598886Z","caller":"traceutil/trace.go:172","msg":"trace[1257846946] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"138.285618ms","start":"2025-09-08T13:41:16.460565Z","end":"2025-09-08T13:41:16.598851Z","steps":["trace[1257846946] 'process raft request'  (duration: 135.584755ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:41:18.247893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.134096ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128039831268646847 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/gadget/gadget\" mod_revision:555 > success:<request_put:<key:\"/registry/daemonsets/gadget/gadget\" value_size:10012 >> failure:<request_range:<key:\"/registry/daemonsets/gadget/gadget\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-08T13:41:18.249339Z","caller":"traceutil/trace.go:172","msg":"trace[792231134] linearizableReadLoop","detail":"{readStateIndex:589; appliedIndex:588; }","duration":"152.437683ms","start":"2025-09-08T13:41:18.095808Z","end":"2025-09-08T13:41:18.248246Z","steps":["trace[792231134] 'read index received'  (duration: 8.356252ms)","trace[792231134] 'applied index is now lower than readState.Index'  (duration: 144.015346ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:41:18.249583Z","caller":"traceutil/trace.go:172","msg":"trace[450647026] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"161.628958ms","start":"2025-09-08T13:41:18.087939Z","end":"2025-09-08T13:41:18.249568Z","steps":["trace[450647026] 'compare'  (duration: 84.888312ms)","trace[450647026] 'store kv pair into bolt db' {req_type:put; key:/registry/daemonsets/gadget/gadget; req_size:10051; } (duration: 51.155691ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T13:41:18.272116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.285211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" limit:1 ","response":"range_response_count:1 size:849"}
	{"level":"info","ts":"2025-09-08T13:41:18.272228Z","caller":"traceutil/trace.go:172","msg":"trace[307411211] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:1; response_revision:579; }","duration":"176.387456ms","start":"2025-09-08T13:41:18.095797Z","end":"2025-09-08T13:41:18.272184Z","steps":["trace[307411211] 'agreement among raft nodes before linearized reading'  (duration: 154.458814ms)","trace[307411211] 'range keys from in-memory index tree'  (duration: 21.679539ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:41:18.281186Z","caller":"traceutil/trace.go:172","msg":"trace[1453730920] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"193.37504ms","start":"2025-09-08T13:41:18.087737Z","end":"2025-09-08T13:41:18.281112Z","steps":["trace[1453730920] 'process raft request'  (duration: 160.436252ms)","trace[1453730920] 'compare'  (duration: 18.038142ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T13:41:18.281866Z","caller":"traceutil/trace.go:172","msg":"trace[244369698] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"152.527654ms","start":"2025-09-08T13:41:18.129329Z","end":"2025-09-08T13:41:18.281856Z","steps":["trace[244369698] 'process raft request'  (duration: 146.286336ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:41:18.282122Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.857234ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-hostpathplugin-provisioner-role\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T13:41:18.282186Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.894863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-kldqv\" limit:1 ","response":"range_response_count:1 size:7157"}
	{"level":"info","ts":"2025-09-08T13:41:18.282246Z","caller":"traceutil/trace.go:172","msg":"trace[446899083] range","detail":"{range_begin:/registry/pods/gadget/gadget-kldqv; range_end:; response_count:1; response_revision:584; }","duration":"152.950995ms","start":"2025-09-08T13:41:18.129277Z","end":"2025-09-08T13:41:18.282228Z","steps":["trace[446899083] 'agreement among raft nodes before linearized reading'  (duration: 152.777183ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282221Z","caller":"traceutil/trace.go:172","msg":"trace[436446437] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-hostpathplugin-provisioner-role; range_end:; response_count:0; response_revision:584; }","duration":"152.968251ms","start":"2025-09-08T13:41:18.129243Z","end":"2025-09-08T13:41:18.282211Z","steps":["trace[436446437] 'agreement among raft nodes before linearized reading'  (duration: 152.828212ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282409Z","caller":"traceutil/trace.go:172","msg":"trace[437679061] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"135.884156ms","start":"2025-09-08T13:41:18.146515Z","end":"2025-09-08T13:41:18.282399Z","steps":["trace[437679061] 'process raft request'  (duration: 135.132499ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282540Z","caller":"traceutil/trace.go:172","msg":"trace[1490017462] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"109.56516ms","start":"2025-09-08T13:41:18.172962Z","end":"2025-09-08T13:41:18.282527Z","steps":["trace[1490017462] 'process raft request'  (duration: 108.721825ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T13:41:18.282814Z","caller":"traceutil/trace.go:172","msg":"trace[480968927] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"106.814459ms","start":"2025-09-08T13:41:18.175990Z","end":"2025-09-08T13:41:18.282805Z","steps":["trace[480968927] 'process raft request'  (duration: 105.761291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T13:41:20.034717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:20.050609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.786656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.814066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.853002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:41:41.881660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:51:02.158866Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1859}
	{"level":"info","ts":"2025-09-08T13:51:02.187809Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1859,"took":"28.073144ms","hash":63255730,"current-db-size-bytes":5959680,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3731456,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-09-08T13:51:02.187871Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":63255730,"revision":1859,"compact-revision":-1}
	
	
	==> kernel <==
	 13:51:27 up  3:33,  0 users,  load average: 0.49, 1.04, 2.67
	Linux addons-242829 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8a649dce93b099bfe465ca8bf631a3ab9344d9ecd2851723430c54a461799248] <==
	I0908 13:49:25.995541       1 main.go:301] handling current node
	I0908 13:49:35.995032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:49:35.995080       1 main.go:301] handling current node
	I0908 13:49:45.994965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:49:45.995103       1 main.go:301] handling current node
	I0908 13:49:55.995519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:49:55.995552       1 main.go:301] handling current node
	I0908 13:50:05.995768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:05.995802       1 main.go:301] handling current node
	I0908 13:50:15.995286       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:15.995319       1 main.go:301] handling current node
	I0908 13:50:25.995221       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:25.995256       1 main.go:301] handling current node
	I0908 13:50:35.995345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:35.995474       1 main.go:301] handling current node
	I0908 13:50:45.994805       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:45.994987       1 main.go:301] handling current node
	I0908 13:50:55.994895       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:50:55.994931       1 main.go:301] handling current node
	I0908 13:51:05.995685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:05.995800       1 main.go:301] handling current node
	I0908 13:51:15.994990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:15.995026       1 main.go:301] handling current node
	I0908 13:51:25.995528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:51:25.995652       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02451464eed55f46245467264da9f81e893ffaba41ab356a8b58aeb45dd426cd] <==
	E0908 13:42:27.970187       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.219.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.219.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.219.191:443: connect: connection refused" logger="UnhandledError"
	E0908 13:42:27.973609       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.219.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.219.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.219.191:443: connect: connection refused" logger="UnhandledError"
	I0908 13:42:28.067095       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0908 13:43:28.686070       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:43:35.820022       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 13:43:49.230535       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55024: use of closed network connection
	E0908 13:43:49.374290       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55032: use of closed network connection
	I0908 13:43:58.797296       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.107.167"}
	I0908 13:44:36.165246       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:44:47.858917       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 13:45:14.280437       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 13:45:28.986446       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 13:45:41.566400       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 13:45:41.866596       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.55.213"}
	I0908 13:45:56.261394       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:46:15.784502       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:47:11.378050       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:47:23.026013       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:48:12.850381       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:48:43.199073       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:49:28.656018       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:49:59.970148       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:50:41.654127       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 13:51:04.776555       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 13:51:15.794570       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4c1bb6d039fd7710a8f3509fee628e1fd01910f45fd2910bb0a58b87a7862478] <==
	I0908 13:41:11.810383       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0908 13:41:11.810548       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0908 13:41:11.810662       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:41:11.811095       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0908 13:41:11.811138       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0908 13:41:11.811151       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0908 13:41:11.813013       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0908 13:41:11.813109       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 13:41:11.819238       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0908 13:41:11.820216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0908 13:41:11.820291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	E0908 13:41:17.389129       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E0908 13:41:41.779116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:41:41.779272       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0908 13:41:41.779310       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 13:41:41.814553       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0908 13:41:41.818927       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 13:41:41.880174       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:41:41.919295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:41:56.814070       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0908 13:42:11.886919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0908 13:42:11.940489       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0908 13:44:02.842941       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I0908 13:44:50.642259       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0908 13:44:54.881185       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [f8d8b89978600a891f8c9d71e92fb37d8c941ffb4c991b43cd45a709b66d1678] <==
	I0908 13:41:18.069957       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:41:18.616656       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:41:18.840155       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:41:18.841296       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:41:18.847631       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:41:19.351869       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:41:19.352004       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:41:19.359683       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:41:19.360098       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:41:19.360300       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:19.362144       1 config.go:200] "Starting service config controller"
	I0908 13:41:19.362210       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:41:19.362253       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:41:19.362282       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:41:19.362333       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:41:19.362361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:41:19.363021       1 config.go:309] "Starting node config controller"
	I0908 13:41:19.363073       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:41:19.363106       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:41:19.464639       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:41:19.472552       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:41:19.472599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [be8406ca5c1ed513a426f97c0170a6d3e01685f7aea08216f53df76eda0a9eef] <==
	I0908 13:41:05.711869       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:41:05.716382       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:05.716439       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:41:05.717258       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:41:05.717499       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0908 13:41:05.730813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0908 13:41:05.740293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 13:41:05.740375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 13:41:05.740423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 13:41:05.740473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 13:41:05.740521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 13:41:05.740484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 13:41:05.740567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 13:41:05.740627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 13:41:05.740664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 13:41:05.740770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 13:41:05.740790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 13:41:05.740832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 13:41:05.741398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 13:41:05.745418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 13:41:05.745556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0908 13:41:05.745616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0908 13:41:05.746443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 13:41:05.747317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0908 13:41:06.816715       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 13:50:37 addons-242829 kubelet[1528]: E0908 13:50:37.453404    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:50:37 addons-242829 kubelet[1528]: E0908 13:50:37.904288    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339437903930279 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:50:37 addons-242829 kubelet[1528]: E0908 13:50:37.904325    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339437903930279 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:50:43 addons-242829 kubelet[1528]: E0908 13:50:43.595826    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:50:44 addons-242829 kubelet[1528]: E0908 13:50:44.595423    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-sptcc" podUID="e6410ae0-5f50-4418-9999-9f6c35305dd4"
	Sep 08 13:50:47 addons-242829 kubelet[1528]: I0908 13:50:47.596436    1528 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 13:50:47 addons-242829 kubelet[1528]: E0908 13:50:47.907012    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339447906715049 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:50:47 addons-242829 kubelet[1528]: E0908 13:50:47.907045    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339447906715049 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:50:50 addons-242829 kubelet[1528]: E0908 13:50:50.596592    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:50:55 addons-242829 kubelet[1528]: E0908 13:50:55.595489    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:50:57 addons-242829 kubelet[1528]: E0908 13:50:57.525247    1528 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1cb041b2c06a3f39cf3755e7593be8efe3f3a4ae0a4ae0ab510c1936e773a877/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1cb041b2c06a3f39cf3755e7593be8efe3f3a4ae0a4ae0ab510c1936e773a877/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 13:50:57 addons-242829 kubelet[1528]: E0908 13:50:57.909921    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339457909653391 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:50:57 addons-242829 kubelet[1528]: E0908 13:50:57.909959    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339457909653391 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:51:02 addons-242829 kubelet[1528]: E0908 13:51:02.597491    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:51:07 addons-242829 kubelet[1528]: E0908 13:51:07.596986    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:51:07 addons-242829 kubelet[1528]: E0908 13:51:07.709338    1528 container_manager_linux.go:562] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21, memory: /docker/3fa9656c3a0378917f88f5815c954bb94c3ed3600ac01c422545abcea932bd21/system.slice/kubelet.service"
	Sep 08 13:51:07 addons-242829 kubelet[1528]: E0908 13:51:07.912279    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339467911960707 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:51:07 addons-242829 kubelet[1528]: E0908 13:51:07.912317    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339467911960707 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:51:14 addons-242829 kubelet[1528]: E0908 13:51:14.596826    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:51:17 addons-242829 kubelet[1528]: E0908 13:51:17.915101    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339477914835320 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:51:17 addons-242829 kubelet[1528]: E0908 13:51:17.915140    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339477914835320 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:51:19 addons-242829 kubelet[1528]: E0908 13:51:19.597497    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="50b60ff4-d270-4f50-8369-f843eaa5c9bf"
	Sep 08 13:51:27 addons-242829 kubelet[1528]: E0908 13:51:27.596244    1528 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9e4cf025-f78d-404a-99ec-0372996d1d4e"
	Sep 08 13:51:27 addons-242829 kubelet[1528]: E0908 13:51:27.917859    1528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757339487917570293 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	Sep 08 13:51:27 addons-242829 kubelet[1528]: E0908 13:51:27.917895    1528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757339487917570293 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:535895} inodes_used:{value:213}}"
	
	
	==> storage-provisioner [fc30762357fd66cdc91ee738b3b766c1a664d382b7c7cd12d9aa361de75a3055] <==
	W0908 13:51:02.728652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:04.731635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:04.739287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:06.746074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:06.750908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:08.753821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:08.760479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:10.764527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:10.769519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:12.772734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:12.777144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:14.779823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:14.784422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:16.787529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:16.794001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:18.797035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:18.801245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:20.804992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:20.810021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:22.812871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:22.817388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:24.820551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:24.827238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:26.832419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:51:26.838815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-242829 -n addons-242829
helpers_test.go:269: (dbg) Run:  kubectl --context addons-242829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq registry-creds-764b6fb674-sptcc
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-242829 describe pod nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq registry-creds-764b6fb674-sptcc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-242829 describe pod nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq registry-creds-764b6fb674-sptcc: exit status 1 (98.654476ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-242829/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:45:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kp6lf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kp6lf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m47s                default-scheduler  Successfully assigned default/nginx to addons-242829
	  Warning  Failed     3m17s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    94s (x4 over 5m46s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     51s (x3 over 4m48s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     51s (x4 over 4m48s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x8 over 4m47s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     1s (x8 over 4m47s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-242829/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:45:25 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wk9js (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-wk9js:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m3s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-242829
	  Warning  Failed     2m47s (x2 over 4m17s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    112s (x4 over 6m3s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     82s (x2 over 5m18s)    kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     82s (x4 over 5m18s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x11 over 5m18s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x11 over 5m18s)    kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4f2x7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9t8cq" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sptcc" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-242829 describe pod nginx task-pv-pod ingress-nginx-admission-create-4f2x7 ingress-nginx-admission-patch-9t8cq registry-creds-764b6fb674-sptcc: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.905055365s)
--- FAIL: TestAddons/parallel/CSI (391.44s)
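A note on the failure mode: both default-namespace pods above are stuck in ImagePullBackOff because anonymous pulls of docker.io/nginx hit Docker Hub's unauthenticated rate limit ("toomanyrequests"). A minimal mitigation sketch, assuming Docker Hub credentials are available on the CI host; the secret name "regcred" and the <user>/<token> placeholders are illustrative, not taken from this run:

	kubectl --context addons-242829 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context addons-242829 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Pods created afterwards under the default service account pull with authenticated (higher) limits; pods already in backoff must be recreated or left to retry.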

TestFunctional/parallel/DashboardCmd (302.87s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-026028 --alsologtostderr -v=1]
E0908 14:13:37.816318  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-026028 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-026028 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-026028 --alsologtostderr -v=1] stderr:
I0908 14:12:40.098484  741488 out.go:360] Setting OutFile to fd 1 ...
I0908 14:12:40.100601  741488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:12:40.100625  741488 out.go:374] Setting ErrFile to fd 2...
I0908 14:12:40.100633  741488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:12:40.100985  741488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
I0908 14:12:40.101338  741488 mustload.go:65] Loading cluster: functional-026028
I0908 14:12:40.101810  741488 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:12:40.102312  741488 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:12:40.122330  741488 host.go:66] Checking if "functional-026028" exists ...
I0908 14:12:40.122754  741488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0908 14:12:40.188286  741488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 14:12:40.17639964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0908 14:12:40.188407  741488 api_server.go:166] Checking apiserver status ...
I0908 14:12:40.188492  741488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0908 14:12:40.188541  741488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:12:40.206847  741488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:12:40.299979  741488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4630/cgroup
I0908 14:12:40.309749  741488 api_server.go:182] apiserver freezer: "12:freezer:/docker/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/crio/crio-479de62c600bdfc4c16b2ac92c2f4d118173fa5dab07926f7db04d866c0e8397"
I0908 14:12:40.309840  741488 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/crio/crio-479de62c600bdfc4c16b2ac92c2f4d118173fa5dab07926f7db04d866c0e8397/freezer.state
I0908 14:12:40.318816  741488 api_server.go:204] freezer state: "THAWED"
I0908 14:12:40.318846  741488 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0908 14:12:40.328424  741488 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
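The dashboard command establishes apiserver health in three steps visible above: pgrep for the kube-apiserver process, a read of its freezer cgroup (THAWED means the cgroup is not frozen), and an HTTPS probe of /healthz. A hedged manual equivalent against this profile, using the address and pattern from the log (curl's -k skips verification of the cluster's self-signed certificate):

	out/minikube-linux-arm64 -p functional-026028 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	curl -k https://192.168.49.2:8441/healthz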
W0908 14:12:40.328469  741488 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0908 14:12:40.328673  741488 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:12:40.328699  741488 addons.go:69] Setting dashboard=true in profile "functional-026028"
I0908 14:12:40.328715  741488 addons.go:238] Setting addon dashboard=true in "functional-026028"
I0908 14:12:40.328744  741488 host.go:66] Checking if "functional-026028" exists ...
I0908 14:12:40.329167  741488 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:12:40.351422  741488 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0908 14:12:40.354421  741488 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0908 14:12:40.357147  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0908 14:12:40.357168  741488 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0908 14:12:40.357260  741488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:12:40.377598  741488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:12:40.484392  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0908 14:12:40.484417  741488 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0908 14:12:40.503445  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0908 14:12:40.503469  741488 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0908 14:12:40.522905  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0908 14:12:40.522958  741488 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0908 14:12:40.542961  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0908 14:12:40.542985  741488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0908 14:12:40.564541  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0908 14:12:40.564569  741488 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0908 14:12:40.585297  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0908 14:12:40.585344  741488 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0908 14:12:40.604315  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0908 14:12:40.604339  741488 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0908 14:12:40.623140  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0908 14:12:40.623164  741488 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0908 14:12:40.642235  741488 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0908 14:12:40.642260  741488 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0908 14:12:40.661103  741488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0908 14:12:41.464655  741488 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-026028 addons enable metrics-server

I0908 14:12:41.467579  741488 addons.go:201] Writing out "functional-026028" config to set dashboard=true...
W0908 14:12:41.467899  741488 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0908 14:12:41.468596  741488 kapi.go:59] client config for functional-026028: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt", KeyFile:"/home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.key", CAFile:"/home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f2d7d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0908 14:12:41.469136  741488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0908 14:12:41.469229  741488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0908 14:12:41.469247  741488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0908 14:12:41.469274  741488 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0908 14:12:41.469296  741488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0908 14:12:41.487765  741488 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  8ea6bc94-b3eb-4033-a420-71e38ac6c271 1771 0 2025-09-08 14:12:41 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-08 14:12:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.109.166.23,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.109.166.23],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
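The Service found above selects k8s-app=kubernetes-dashboard and maps port 80 to targetPort 9090, but a Service object existing says nothing about readiness: the apiserver proxy answers 503 whenever the Service has no ready endpoints. A quick check, hypothetical for this run but using only names present in the dump:

	kubectl --context functional-026028 -n kubernetes-dashboard get endpoints kubernetes-dashboard
	kubectl --context functional-026028 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard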
W0908 14:12:41.487955  741488 out.go:285] * Launching proxy ...
* Launching proxy ...
I0908 14:12:41.488030  741488 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-026028 proxy --port 36195]
I0908 14:12:41.488339  741488 dashboard.go:157] Waiting for kubectl to output host:port ...
I0908 14:12:41.542670  741488 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0908 14:12:41.542729  741488 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
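The URL polled below follows the apiserver's standard service-proxy path, /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port-name>/proxy/, routed through the local kubectl proxy on 127.0.0.1:36195. While the proxy is running, the same probe can be issued by hand:

	curl -si http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/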
I0908 14:12:41.563284  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4d323342-a818-41a7-af34-8568983517fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004defc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400073bcc0 TLS:<nil>}
I0908 14:12:41.563356  741488 retry.go:31] will retry after 98.903µs: Temporary Error: unexpected response code: 503
I0908 14:12:41.569377  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ba26271-3c68-48fa-a356-d539f972e8db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004df040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400073be00 TLS:<nil>}
I0908 14:12:41.569443  741488 retry.go:31] will retry after 159.828µs: Temporary Error: unexpected response code: 503
I0908 14:12:41.577409  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a8fddb69-fa59-4d92-8a39-19ec8472a3e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40008f7e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000514000 TLS:<nil>}
I0908 14:12:41.577477  741488 retry.go:31] will retry after 285.178µs: Temporary Error: unexpected response code: 503
I0908 14:12:41.581536  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac084ea8-5c88-482c-9660-3d9ab7559a35] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40008f7ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047a3c0 TLS:<nil>}
I0908 14:12:41.581594  741488 retry.go:31] will retry after 216.499µs: Temporary Error: unexpected response code: 503
I0908 14:12:41.585737  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddc6dcec-3188-4ea4-bb26-2c447382de0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40008f7f40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047a500 TLS:<nil>}
I0908 14:12:41.585796  741488 retry.go:31] will retry after 314.829µs: Temporary Error: unexpected response code: 503
I0908 14:12:41.591516  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7dc5c374-68ab-415f-8836-0af5fed53a89] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004e8040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047a640 TLS:<nil>}
I0908 14:12:41.591602  741488 retry.go:31] will retry after 498.469µs: Temporary Error: unexpected response code: 503
I0908 14:12:41.595929  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5664f012-15ea-4d42-819b-3b3943edfbe5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004e80c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047a780 TLS:<nil>}
I0908 14:12:41.595998  741488 retry.go:31] will retry after 1.193128ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.602022  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4991b84-afcc-4134-90b8-3fc24d0adb67] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004e8140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047a8c0 TLS:<nil>}
I0908 14:12:41.602094  741488 retry.go:31] will retry after 1.611045ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.611752  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e9f97cbd-8686-45ba-8b87-f015131a03a3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004e81c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047aa00 TLS:<nil>}
I0908 14:12:41.611818  741488 retry.go:31] will retry after 2.987526ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.618458  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f345b815-a7e2-4a58-9a93-877f9f2687c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004e8240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047ab40 TLS:<nil>}
I0908 14:12:41.618519  741488 retry.go:31] will retry after 2.618048ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.625052  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a0027896-9ab1-4093-a1e7-00b64307ced9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004df740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000514140 TLS:<nil>}
I0908 14:12:41.625114  741488 retry.go:31] will retry after 6.630946ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.636984  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b857ba51-d5ec-4084-99b9-a8273d84e972] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004df800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40005143c0 TLS:<nil>}
I0908 14:12:41.637052  741488 retry.go:31] will retry after 5.078585ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.647364  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e1c666b6-b979-4e7d-af1d-49be2ef289d5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004df880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000514500 TLS:<nil>}
I0908 14:12:41.647433  741488 retry.go:31] will retry after 7.061067ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.657627  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[952f4345-d75b-435c-b09e-4c0810470090] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004df980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000514640 TLS:<nil>}
I0908 14:12:41.657692  741488 retry.go:31] will retry after 22.967002ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.684211  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[574514a5-4172-40db-946e-40380f463bff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004dfa00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000514780 TLS:<nil>}
I0908 14:12:41.684285  741488 retry.go:31] will retry after 38.631406ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.726545  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea3b81dd-71fa-4cc8-99b3-e1cde96482e6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004dfa80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40005148c0 TLS:<nil>}
I0908 14:12:41.726608  741488 retry.go:31] will retry after 23.026639ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.753403  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2dda2be0-83e9-4cfc-9761-0ad5115be435] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004dfc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000514b40 TLS:<nil>}
I0908 14:12:41.753466  741488 retry.go:31] will retry after 53.485433ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.811135  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7db120c-7f9d-4494-8f85-6347ea5abe40] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x40004e8640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40005152c0 TLS:<nil>}
I0908 14:12:41.811244  741488 retry.go:31] will retry after 59.693603ms: Temporary Error: unexpected response code: 503
I0908 14:12:41.874601  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fb69f547-ce6d-4cf1-bb36-1fcd4c79055f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:41 GMT]] Body:0x4000563040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000515400 TLS:<nil>}
I0908 14:12:41.874672  741488 retry.go:31] will retry after 136.423591ms: Temporary Error: unexpected response code: 503
I0908 14:12:42.016929  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1917d282-ad36-4614-8647-cac7d8b0b470] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:42 GMT]] Body:0x40004e8780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047ac80 TLS:<nil>}
I0908 14:12:42.017012  741488 retry.go:31] will retry after 315.655473ms: Temporary Error: unexpected response code: 503
I0908 14:12:42.336812  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d0a9717c-3081-491d-a261-3f185b1314b4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:42 GMT]] Body:0x4000563100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000515540 TLS:<nil>}
I0908 14:12:42.336887  741488 retry.go:31] will retry after 440.758952ms: Temporary Error: unexpected response code: 503
I0908 14:12:42.781642  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c2513c14-9845-4257-8d7a-8fd62e4a1114] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:42 GMT]] Body:0x40004e8880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047adc0 TLS:<nil>}
I0908 14:12:42.781705  741488 retry.go:31] will retry after 571.091972ms: Temporary Error: unexpected response code: 503
I0908 14:12:43.356511  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb04ba77-b445-4a28-8008-97878255bc27] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:43 GMT]] Body:0x4000563500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000515680 TLS:<nil>}
I0908 14:12:43.356577  741488 retry.go:31] will retry after 423.241511ms: Temporary Error: unexpected response code: 503
I0908 14:12:43.783035  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8f4b6b99-a594-4d64-9238-f4f2814e1183] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:43 GMT]] Body:0x40005635c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047af00 TLS:<nil>}
I0908 14:12:43.783098  741488 retry.go:31] will retry after 592.444748ms: Temporary Error: unexpected response code: 503
I0908 14:12:44.379015  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2ce71fde-96de-460d-b228-ccbf4ca63c70] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:44 GMT]] Body:0x40005636c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047b040 TLS:<nil>}
I0908 14:12:44.379089  741488 retry.go:31] will retry after 1.357806227s: Temporary Error: unexpected response code: 503
I0908 14:12:45.739982  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[58fefeef-4c91-4204-be72-ecb5c2cb0145] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:45 GMT]] Body:0x4000563780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40005157c0 TLS:<nil>}
I0908 14:12:45.740052  741488 retry.go:31] will retry after 2.430820777s: Temporary Error: unexpected response code: 503
I0908 14:12:48.174117  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a844bfba-2b8d-4028-9c18-eee946af14a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:48 GMT]] Body:0x40004e8a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047b180 TLS:<nil>}
I0908 14:12:48.174177  741488 retry.go:31] will retry after 5.176847315s: Temporary Error: unexpected response code: 503
I0908 14:12:53.354264  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[600bea9e-a2ee-403f-afb3-c04f5db88fc1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:53 GMT]] Body:0x40004e8b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000515900 TLS:<nil>}
I0908 14:12:53.354327  741488 retry.go:31] will retry after 5.18082944s: Temporary Error: unexpected response code: 503
I0908 14:12:58.539822  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[65038474-b7bc-4bf1-8411-49549172025e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:12:58 GMT]] Body:0x40004e8c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000515a40 TLS:<nil>}
I0908 14:12:58.539885  741488 retry.go:31] will retry after 11.07989418s: Temporary Error: unexpected response code: 503
I0908 14:13:09.626442  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[815973cf-7d47-4490-bf13-735b431c2017] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:13:09 GMT]] Body:0x40004e8cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000515b80 TLS:<nil>}
I0908 14:13:09.626515  741488 retry.go:31] will retry after 9.008368807s: Temporary Error: unexpected response code: 503
I0908 14:13:18.639007  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3a26cf78-3d5d-46ac-999a-5fc3177790a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:13:18 GMT]] Body:0x40004e8d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047b2c0 TLS:<nil>}
I0908 14:13:18.639067  741488 retry.go:31] will retry after 16.055686792s: Temporary Error: unexpected response code: 503
I0908 14:13:34.699824  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89788886-259d-4365-ac90-4133dbe006d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:13:34 GMT]] Body:0x40004e8e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047b680 TLS:<nil>}
I0908 14:13:34.699891  741488 retry.go:31] will retry after 30.324980088s: Temporary Error: unexpected response code: 503
I0908 14:14:05.027888  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c252a96e-e11c-4033-8c1c-945e5a039d71] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:14:05 GMT]] Body:0x40004e8e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047b7c0 TLS:<nil>}
I0908 14:14:05.027952  741488 retry.go:31] will retry after 53.723979487s: Temporary Error: unexpected response code: 503
I0908 14:14:58.755263  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[44fe946f-3a07-4318-82f9-6580fa227686] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:14:58 GMT]] Body:0x40004e8080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400047a000 TLS:<nil>}
I0908 14:14:58.755328  741488 retry.go:31] will retry after 41.865491686s: Temporary Error: unexpected response code: 503
I0908 14:15:40.624601  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[94dc9729-d82a-4536-ac45-47f4600892d0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:15:40 GMT]] Body:0x40004e8140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b4280 TLS:<nil>}
I0908 14:15:40.624671  741488 retry.go:31] will retry after 1m26.010915656s: Temporary Error: unexpected response code: 503
I0908 14:17:06.639500  741488 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7cf1571-60ca-4469-bf46-47735cd031e3] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 08 Sep 2025 14:17:06 GMT]] Body:0x40005630c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b43c0 TLS:<nil>}
I0908 14:17:06.639573  741488 retry.go:31] will retry after 1m14.593462161s: Temporary Error: unexpected response code: 503
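Every probe over roughly five minutes returns 503 (the body grows from 182 to 188 bytes but never reaches 200). Given that the addon pulls docker.io/kubernetesui/dashboard:v2.7.0 and docker.io/kubernetesui/metrics-scraper:v1.0.8 from Docker Hub (see the "Using image" lines above), the same unauthenticated rate limiting that failed TestAddons/parallel/CSI plausibly left the dashboard pod in ImagePullBackOff, so its Service never gained endpoints. The confirmation one would run against this profile (standard kubectl; the diagnosis is an inference, not shown in this log):

	kubectl --context functional-026028 -n kubernetes-dashboard get pods -o wide
	kubectl --context functional-026028 -n kubernetes-dashboard get events --sort-by=.lastTimestamp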
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-026028
helpers_test.go:243: (dbg) docker inspect functional-026028:

-- stdout --
	[
	    {
	        "Id": "3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c",
	        "Created": "2025-09-08T13:55:04.098213063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:55:04.161602565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/hosts",
	        "LogPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c-json.log",
	        "Name": "/functional-026028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-026028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-026028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c",
	                "LowerDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c-init/diff:/var/lib/docker/overlay2/e952efc7b05509e28dc878e8f391bd641c7ca24b818c7ca2dd0e02dcca8c51fa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-026028",
	                "Source": "/var/lib/docker/volumes/functional-026028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-026028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-026028",
	                "name.minikube.sigs.k8s.io": "functional-026028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58659b49596eea8f96710b050a5b80485b1833e758b8bc0a01895293a176d953",
	            "SandboxKey": "/var/run/docker/netns/58659b49596e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-026028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:53:c0:f2:5a:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8a2325d2872d7ed1a25e04b95cca0f711ee83426dbd8e9711f763c04bbc03be",
	                    "EndpointID": "89a4a8506af874b5647adc620b92b262df1cca74fe3226741a99a791d979647b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-026028",
	                        "3e4b4cd85a48"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
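The host ports the tests dial (e.g., 33538 for SSH) come from NetworkSettings.Ports in the inspect output above; the cli_runner lines earlier in this log extract them with a Go template, and the same one-liner works directly:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-026028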
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-026028 -n functional-026028
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 logs -n 25: (1.791672099s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                   ARGS                                                    │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-026028 ssh findmnt -T /mount1                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh            │ functional-026028 ssh findmnt -T /mount2                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh            │ functional-026028 ssh findmnt -T /mount3                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ mount          │ -p functional-026028 --kill=true                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ addons         │ functional-026028 addons list                                                                             │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:02 UTC │
	│ addons         │ functional-026028 addons list -o json                                                                     │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:02 UTC │
	│ start          │ -p functional-026028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:12 UTC │                     │
	│ start          │ -p functional-026028 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio           │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:12 UTC │                     │
	│ start          │ -p functional-026028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:12 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-026028 --alsologtostderr -v=1                                            │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:12 UTC │                     │
	│ service        │ functional-026028 service list                                                                            │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ service        │ functional-026028 service list -o json                                                                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ service        │ functional-026028 service --namespace=default --https --url hello-node                                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │                     │
	│ service        │ functional-026028 service hello-node --url --format={{.IP}}                                               │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │                     │
	│ service        │ functional-026028 service hello-node --url                                                                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │                     │
	│ image          │ functional-026028 image ls --format short --alsologtostderr                                               │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ image          │ functional-026028 image ls --format yaml --alsologtostderr                                                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ ssh            │ functional-026028 ssh pgrep buildkitd                                                                     │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │                     │
	│ image          │ functional-026028 image build -t localhost/my-image:functional-026028 testdata/build --alsologtostderr    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ image          │ functional-026028 image ls                                                                                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ image          │ functional-026028 image ls --format json --alsologtostderr                                                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ image          │ functional-026028 image ls --format table --alsologtostderr                                               │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ update-context │ functional-026028 update-context --alsologtostderr -v=2                                                   │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ update-context │ functional-026028 update-context --alsologtostderr -v=2                                                   │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	│ update-context │ functional-026028 update-context --alsologtostderr -v=2                                                   │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:13 UTC │ 08 Sep 25 14:13 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 14:12:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 14:12:39.846773  741443 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:12:39.846950  741443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:12:39.846978  741443 out.go:374] Setting ErrFile to fd 2...
	I0908 14:12:39.847000  741443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:12:39.847392  741443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:12:39.847830  741443 out.go:368] Setting JSON to false
	I0908 14:12:39.848774  741443 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14110,"bootTime":1757326650,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 14:12:39.848880  741443 start.go:140] virtualization:  
	I0908 14:12:39.852385  741443 out.go:179] * [functional-026028] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:12:39.855412  741443 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:12:39.855476  741443 notify.go:220] Checking for updates...
	I0908 14:12:39.861486  741443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:12:39.864356  741443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 14:12:39.867174  741443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 14:12:39.870045  741443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:12:39.872894  741443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:12:39.876327  741443 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:12:39.876874  741443 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:12:39.911043  741443 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:12:39.911154  741443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:12:39.979555  741443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 14:12:39.970082784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:12:39.979673  741443 docker.go:318] overlay module found
	I0908 14:12:39.982865  741443 out.go:179] * Using the docker driver based on the existing profile
	I0908 14:12:39.985643  741443 start.go:304] selected driver: docker
	I0908 14:12:39.985666  741443 start.go:918] validating driver "docker" against &{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:12:39.985781  741443 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:12:39.989423  741443 out.go:203] 
	W0908 14:12:39.992309  741443 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 14:12:39.995128  741443 out.go:203] 
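
The dry-run above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below minikube's 1800MB usable minimum, so validation fails before any driver work starts. Assuming the same profile and flags recorded in the Audit table, any value at or above that floor should pass the check, e.g.:

	out/minikube-linux-arm64 start -p functional-026028 --dry-run --memory 2048MB --alsologtostderr --driver=docker --container-runtime=crio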
	
	
	==> CRI-O <==
	Sep 08 14:16:27 functional-026028 crio[4142]: time="2025-09-08 14:16:27.548536073Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f1dfea50-d02d-4635-9c3a-547a9da639aa name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:27 functional-026028 crio[4142]: time="2025-09-08 14:16:27.548790156Z" level=info msg="Image docker.io/nginx:alpine not found" id=f1dfea50-d02d-4635-9c3a-547a9da639aa name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:37 functional-026028 crio[4142]: time="2025-09-08 14:16:37.994758395Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8ffa59eb-384a-4077-8ec5-2edf54c6a616 name=/runtime.v1.ImageService/PullImage
	Sep 08 14:16:37 functional-026028 crio[4142]: time="2025-09-08 14:16:37.995388893Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=4960a9f7-26c9-4d74-8b5c-2666ecbaebf5 name=/runtime.v1.ImageService/PullImage
	Sep 08 14:16:37 functional-026028 crio[4142]: time="2025-09-08 14:16:37.997446318Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 08 14:16:41 functional-026028 crio[4142]: time="2025-09-08 14:16:41.548656554Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=bd70dd30-7f60-4663-8a68-9b7410da62e3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:41 functional-026028 crio[4142]: time="2025-09-08 14:16:41.548900840Z" level=info msg="Image docker.io/nginx:alpine not found" id=bd70dd30-7f60-4663-8a68-9b7410da62e3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:49 functional-026028 crio[4142]: time="2025-09-08 14:16:49.548703491Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=56b2a92a-7447-45ea-acb0-8491badcebb3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:49 functional-026028 crio[4142]: time="2025-09-08 14:16:49.548981386Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=56b2a92a-7447-45ea-acb0-8491badcebb3 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:56 functional-026028 crio[4142]: time="2025-09-08 14:16:56.549511875Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7599f7da-769e-481c-ae40-cf799ccb7d4b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:16:56 functional-026028 crio[4142]: time="2025-09-08 14:16:56.549765704Z" level=info msg="Image docker.io/nginx:alpine not found" id=7599f7da-769e-481c-ae40-cf799ccb7d4b name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:03 functional-026028 crio[4142]: time="2025-09-08 14:17:03.548822230Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=32bf2ae1-eba3-455f-87f6-6e40d81ac1d0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:03 functional-026028 crio[4142]: time="2025-09-08 14:17:03.549117092Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=32bf2ae1-eba3-455f-87f6-6e40d81ac1d0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:08 functional-026028 crio[4142]: time="2025-09-08 14:17:08.261844857Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c1ca0c61-0dd6-4d26-b9b3-36ad46cc6a15 name=/runtime.v1.ImageService/PullImage
	Sep 08 14:17:08 functional-026028 crio[4142]: time="2025-09-08 14:17:08.263024029Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 08 14:17:10 functional-026028 crio[4142]: time="2025-09-08 14:17:10.548442219Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=06ae0ce3-7233-4941-9575-deb724aba51f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:10 functional-026028 crio[4142]: time="2025-09-08 14:17:10.548658435Z" level=info msg="Image docker.io/nginx:alpine not found" id=06ae0ce3-7233-4941-9575-deb724aba51f name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:22 functional-026028 crio[4142]: time="2025-09-08 14:17:22.550251398Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fc44a80d-b6c4-4320-bbf9-343cec792985 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:22 functional-026028 crio[4142]: time="2025-09-08 14:17:22.550495487Z" level=info msg="Image docker.io/nginx:alpine not found" id=fc44a80d-b6c4-4320-bbf9-343cec792985 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:23 functional-026028 crio[4142]: time="2025-09-08 14:17:23.548036904Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=0a0067cd-fc75-4669-9ea6-15c5cd1b4fd0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:23 functional-026028 crio[4142]: time="2025-09-08 14:17:23.548366138Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=0a0067cd-fc75-4669-9ea6-15c5cd1b4fd0 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:35 functional-026028 crio[4142]: time="2025-09-08 14:17:35.548975492Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=efc17010-5d35-427c-a260-7989c4725751 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:35 functional-026028 crio[4142]: time="2025-09-08 14:17:35.549197714Z" level=info msg="Image docker.io/nginx:alpine not found" id=efc17010-5d35-427c-a260-7989c4725751 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:36 functional-026028 crio[4142]: time="2025-09-08 14:17:36.548802500Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e97b5a20-e66c-43f8-8662-6217d73aeb92 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 14:17:36 functional-026028 crio[4142]: time="2025-09-08 14:17:36.549078901Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=e97b5a20-e66c-43f8-8662-6217d73aeb92 name=/runtime.v1.ImageService/ImageStatus
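
The CRI-O log shows the kubelet repeatedly asking for docker.io/nginx:alpine and the two dashboard images and never finding them locally, which lines up with the pods in this run stuck waiting on image pulls. The same state can be checked by hand on the node; a sketch, assuming shell access through the minikube binary used above:

	out/minikube-linux-arm64 -p functional-026028 ssh -- sudo crictl images
	out/minikube-linux-arm64 -p functional-026028 ssh -- sudo crictl pull docker.io/nginx:alpine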
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	052077fb4ef70       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              mount-munger              0                   2fbae193fa4cb       busybox-mount
	087d685e5ffee       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      20 minutes ago      Running             kindnet-cni               2                   6914497ef7bdc       kindnet-p7x2n
	ef9c2d75c43bc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      20 minutes ago      Running             coredns                   2                   93bd1427e2d1c       coredns-66bc5c9577-xvrzf
	ff0cf621523f4       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      20 minutes ago      Running             kube-proxy                2                   2acff923b73b1       kube-proxy-jfwzk
	706359fe97442       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      20 minutes ago      Running             storage-provisioner       2                   44e155e5db78f       storage-provisioner
	479de62c600bd       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                      20 minutes ago      Running             kube-apiserver            0                   61964370a85e0       kube-apiserver-functional-026028
	7878d0468509f       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      20 minutes ago      Running             kube-scheduler            2                   1953fe0dc0ed5       kube-scheduler-functional-026028
	61d1b9b13b99c       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      20 minutes ago      Running             kube-controller-manager   2                   3553b89b2b9ea       kube-controller-manager-functional-026028
	6543937edb124       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      20 minutes ago      Running             etcd                      2                   3af73614df28e       etcd-functional-026028
	a2eb7609df02e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      21 minutes ago      Exited              etcd                      1                   3af73614df28e       etcd-functional-026028
	4671e514f80d2       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                      21 minutes ago      Exited              kube-scheduler            1                   1953fe0dc0ed5       kube-scheduler-functional-026028
	88e7510fea0d0       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                      21 minutes ago      Exited              kube-proxy                1                   2acff923b73b1       kube-proxy-jfwzk
	aad81915237f9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      21 minutes ago      Exited              storage-provisioner       1                   44e155e5db78f       storage-provisioner
	b1b3488e207d5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      21 minutes ago      Exited              coredns                   1                   93bd1427e2d1c       coredns-66bc5c9577-xvrzf
	683391a1836d7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      21 minutes ago      Exited              kindnet-cni               1                   6914497ef7bdc       kindnet-p7x2n
	705f0150212b2       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                      21 minutes ago      Exited              kube-controller-manager   1                   3553b89b2b9ea       kube-controller-manager-functional-026028
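
The ATTEMPT column tells the restart story: most control-plane and system containers have an exited attempt-1 instance and a running attempt-2 instance from the cluster restart earlier in this run, while kube-apiserver only has the attempt-0 container created after the restart. The same view, including exited containers, comes from crictl on the node (same access assumption as above):

	out/minikube-linux-arm64 -p functional-026028 ssh -- sudo crictl ps -a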
	
	
	==> coredns [b1b3488e207d50f46ada696df617b63e45f9702da68cf49562cac4d36d726359] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51367 - 38383 "HINFO IN 4930380607319425861.1996465060149016207. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024389354s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
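
The forbidden errors above came from the exited attempt-1 coredns container and are the pattern commonly seen while the apiserver's RBAC data is still syncing after a restart; the attempt-2 log that follows shows a clean start. Whether the coredns service account currently holds the listed permissions can be verified with impersonation (standard kubectl; the context name is the one used throughout this run):

	kubectl --context functional-026028 auth can-i list services --as=system:serviceaccount:kube-system:coredns
	kubectl --context functional-026028 auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns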
	
	
	==> coredns [ef9c2d75c43bcf2e3faf1eeaabc33b72e42c76c32c5e735cdae61265db80b13e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42338 - 56776 "HINFO IN 2397839777711660057.6986415456635719998. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013940437s
	
	
	==> describe nodes <==
	Name:               functional-026028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-026028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4237956cfce90d4ab760d817400bd4c89cad50d6
	                    minikube.k8s.io/name=functional-026028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T13_55_28_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 13:55:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-026028
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 14:17:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 14:14:17 +0000   Mon, 08 Sep 2025 13:55:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 14:14:17 +0000   Mon, 08 Sep 2025 13:55:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 14:14:17 +0000   Mon, 08 Sep 2025 13:55:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 14:14:17 +0000   Mon, 08 Sep 2025 13:56:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-026028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 accd43a8bbb84cf99e932300896ee6a5
	  System UUID:                cbd1d5ac-baca-427c-b2f1-bdfdb249d93d
	  Boot ID:                    3b69f852-7505-47f7-82de-581d66319e23
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fbgrs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-node-connect-7d85dfc575-wrmb2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-xvrzf                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 etcd-functional-026028                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-p7x2n                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-functional-026028              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-functional-026028     200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-jfwzk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-functional-026028              100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-rx9tp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j67cq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 22m                kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 21m                kube-proxy       
	  Warning  CgroupV1                 22m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 22m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22m                kubelet          Node functional-026028 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                kubelet          Node functional-026028 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m                kubelet          Node functional-026028 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22m                node-controller  Node functional-026028 event: Registered Node functional-026028 in Controller
	  Normal   NodeReady                21m                kubelet          Node functional-026028 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node functional-026028 event: Registered Node functional-026028 in Controller
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node functional-026028 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node functional-026028 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node functional-026028 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node functional-026028 event: Registered Node functional-026028 in Controller
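
As a cross-check, the Allocated resources block is just the column sums from the pod table above: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m/2000m = 42% of the node's 2 CPUs (truncated), and memory requests are 70Mi + 100Mi + 50Mi = 220Mi, about 2% of the 8022304Ki allocatable.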
	
	
	==> dmesg <==
	[Sep 8 12:35] FS-Cache: Duplicate cookie detected
	[  +0.000684] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000907] FS-Cache: O-cookie d=00000000f75621f8{9P.session} n=000000002e0501ee
	[  +0.001029] FS-Cache: O-key=[10] '34323936393639353436'
	[  +0.000727] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000883] FS-Cache: N-cookie d=00000000f75621f8{9P.session} n=00000000ccfa13d2
	[  +0.001067] FS-Cache: N-key=[10] '34323936393639353436'
	[Sep 8 13:39] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [6543937edb124927e8322955a3882747c11013e9a20ff2c2be11dcf67e4ba0f4] <==
	{"level":"warn","ts":"2025-09-08T13:57:15.568351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.630436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.654164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.679033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.707428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.741527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.762808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.792500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.813945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.844273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.874179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.911055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.953842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:15.988419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:16.050444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:57:16.164533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33736","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T14:07:14.376209Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-09-08T14:07:14.384703Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"8.096723ms","hash":1870317089,"current-db-size-bytes":3207168,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":3207168,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2025-09-08T14:07:14.384756Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1870317089,"revision":959,"compact-revision":-1}
	{"level":"info","ts":"2025-09-08T14:12:14.383594Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1414}
	{"level":"info","ts":"2025-09-08T14:12:14.388123Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1414,"took":"4.133006ms","hash":1149694948,"current-db-size-bytes":3207168,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2080768,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-08T14:12:14.388177Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1149694948,"revision":1414,"compact-revision":959}
	{"level":"info","ts":"2025-09-08T14:17:14.390118Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1694}
	{"level":"info","ts":"2025-09-08T14:17:14.393992Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1694,"took":"3.457081ms","hash":334947810,"current-db-size-bytes":3207168,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2068480,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-09-08T14:17:14.394043Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":334947810,"revision":1694,"compact-revision":1414}
	
	
	==> etcd [a2eb7609df02e8c01390cf4d97da95e5398239a66e09a3e5323636481ddd3508] <==
	{"level":"warn","ts":"2025-09-08T13:56:30.321062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:56:30.343350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:56:30.362725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:56:30.405510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:56:30.432227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:56:30.448902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T13:56:30.552242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38796","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-08T13:56:57.089663Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-08T13:56:57.089849Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-026028","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-08T13:56:57.089941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T13:56:57.239711Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-08T13:56:57.241185Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T13:56:57.241234Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-08T13:56:57.241302Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-08T13:56:57.241321Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-08T13:56:57.241297Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T13:56:57.241396Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T13:56:57.241430Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-08T13:56:57.241519Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-08T13:56:57.241540Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-08T13:56:57.241550Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T13:56:57.245229Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-08T13:56:57.245422Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-08T13:56:57.245469Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-08T13:56:57.245477Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-026028","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 14:17:41 up  4:00,  0 users,  load average: 0.11, 0.21, 0.72
	Linux functional-026028 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [087d685e5ffee175027d77b016086ca994df1e88b17cc70943dfc7c903746f3d] <==
	I0908 14:15:38.342250       1 main.go:301] handling current node
	I0908 14:15:48.341453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:15:48.341486       1 main.go:301] handling current node
	I0908 14:15:58.341239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:15:58.341300       1 main.go:301] handling current node
	I0908 14:16:08.341308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:16:08.341457       1 main.go:301] handling current node
	I0908 14:16:18.342632       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:16:18.342757       1 main.go:301] handling current node
	I0908 14:16:28.343715       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:16:28.343842       1 main.go:301] handling current node
	I0908 14:16:38.349644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:16:38.349680       1 main.go:301] handling current node
	I0908 14:16:48.348165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:16:48.348295       1 main.go:301] handling current node
	I0908 14:16:58.343751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:16:58.343787       1 main.go:301] handling current node
	I0908 14:17:08.341207       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:17:08.341341       1 main.go:301] handling current node
	I0908 14:17:18.340966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:17:18.341009       1 main.go:301] handling current node
	I0908 14:17:28.340516       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:17:28.340651       1 main.go:301] handling current node
	I0908 14:17:38.341305       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 14:17:38.341337       1 main.go:301] handling current node
	
	
	==> kindnet [683391a1836d7a3ce9c858ef82f6a1f2482bf2b8552e335a29ed0476f25e53ed] <==
	I0908 13:56:27.766058       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0908 13:56:27.833404       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0908 13:56:27.834333       1 main.go:148] setting mtu 1500 for CNI 
	I0908 13:56:27.834627       1 main.go:178] kindnetd IP family: "ipv4"
	I0908 13:56:27.834659       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-08T13:56:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0908 13:56:28.048125       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0908 13:56:28.048154       1 controller.go:381] "Waiting for informer caches to sync"
	I0908 13:56:28.048184       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0908 13:56:28.052348       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0908 13:56:31.800461       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0908 13:56:31.800680       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0908 13:56:31.800774       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0908 13:56:31.800861       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0908 13:56:33.349111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0908 13:56:33.349221       1 metrics.go:72] Registering metrics
	I0908 13:56:33.349316       1 controller.go:711] "Syncing nftables rules"
	I0908 13:56:38.050272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:56:38.050312       1 main.go:301] handling current node
	I0908 13:56:48.047649       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 13:56:48.047683       1 main.go:301] handling current node
	
	
	==> kube-apiserver [479de62c600bdfc4c16b2ac92c2f4d118173fa5dab07926f7db04d866c0e8397] <==
	I0908 14:05:27.228556       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:05:45.869796       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:06:39.754065       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:07:03.315442       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:07:17.116100       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 14:07:57.040987       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:08:14.182805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:09:02.041625       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:09:26.754362       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:10:30.407885       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:10:49.016574       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:11:46.462379       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:12:05.872560       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:12:41.096159       1 controller.go:667] quota admission added evaluator for: namespaces
	I0908 14:12:41.431392       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.166.23"}
	I0908 14:12:41.454594       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.167.186"}
	I0908 14:13:12.345522       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:13:18.418996       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:14:32.151677       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:14:36.438670       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:15:53.418083       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:16:06.344341       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:17:12.151517       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 14:17:17.116127       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0908 14:17:25.173985       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [61d1b9b13b99c12965a460927327f7ef96151bb139d8d5c9e9fddd986f03e123] <==
	I0908 13:57:20.556270       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 13:57:20.557017       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0908 13:57:20.561584       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 13:57:20.565755       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:57:20.567014       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0908 13:57:20.568202       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0908 13:57:20.570996       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:57:20.571112       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:57:20.571146       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:57:20.571177       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:57:20.577807       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 13:57:20.587261       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0908 13:57:20.589673       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:57:20.602351       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0908 13:57:20.607558       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0908 13:57:20.608764       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0908 13:57:20.610850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	E0908 14:12:41.194109       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.214436       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.214734       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.225408       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.228915       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.233147       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.237603       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0908 14:12:41.242712       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
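
The eight "serviceaccount \"kubernetes-dashboard\" not found" errors above are a creation-order race rather than a persistent fault: the dashboard ReplicaSets were applied before their ServiceAccount had been registered, and the ReplicaSet controller simply retries with backoff until it exists. A hypothetical verification step, in the same style as the report's other kubectl invocations, would be:

	kubectl --context functional-026028 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard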
	
	
	==> kube-controller-manager [705f0150212b231d01218ac1b3bae404294054472077925f98a204294722c0ec] <==
	I0908 13:56:35.005006       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0908 13:56:35.005083       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0908 13:56:35.006448       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0908 13:56:35.006892       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0908 13:56:35.009202       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0908 13:56:35.011407       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0908 13:56:35.011449       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:56:35.014705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 13:56:35.014732       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0908 13:56:35.014740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0908 13:56:35.019740       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0908 13:56:35.020928       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0908 13:56:35.026199       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0908 13:56:35.041611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0908 13:56:35.041822       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0908 13:56:35.041891       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0908 13:56:35.041904       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0908 13:56:35.041915       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0908 13:56:35.045662       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0908 13:56:35.045714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0908 13:56:35.045735       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0908 13:56:35.045740       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0908 13:56:35.045745       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0908 13:56:35.050147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0908 13:56:35.053442       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [88e7510fea0d0c4274fd375cb64efbb60c3b9d907dc556400874c9ec7791a1bd] <==
	I0908 13:56:31.653273       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:56:32.278001       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:56:32.382782       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:56:32.382821       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:56:32.382903       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:56:32.737377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:56:32.737510       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:56:32.753281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:56:32.753662       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:56:32.757290       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:56:32.759677       1 config.go:200] "Starting service config controller"
	I0908 13:56:32.763666       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:56:32.762686       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:56:32.763781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:56:32.762706       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:56:32.763859       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:56:32.763394       1 config.go:309] "Starting node config controller"
	I0908 13:56:32.763917       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:56:32.763947       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:56:32.864913       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:56:32.865030       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:56:32.865044       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ff0cf621523f4ef7fffcdb831e2aa9b2fa900bab1729935763bee5661d2d3e73] <==
	I0908 13:57:18.253564       1 server_linux.go:53] "Using iptables proxy"
	I0908 13:57:18.345898       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 13:57:18.446392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 13:57:18.446442       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 13:57:18.446524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 13:57:18.467838       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 13:57:18.467892       1 server_linux.go:132] "Using iptables Proxier"
	I0908 13:57:18.472700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 13:57:18.472992       1 server.go:527] "Version info" version="v1.34.0"
	I0908 13:57:18.473017       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:57:18.474086       1 config.go:200] "Starting service config controller"
	I0908 13:57:18.474104       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 13:57:18.477281       1 config.go:106] "Starting endpoint slice config controller"
	I0908 13:57:18.477302       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 13:57:18.477325       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 13:57:18.477330       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 13:57:18.478040       1 config.go:309] "Starting node config controller"
	I0908 13:57:18.478108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 13:57:18.478139       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 13:57:18.574520       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 13:57:18.577679       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0908 13:57:18.577689       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
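
The E-level "Kube-proxy configuration may be incomplete or incorrect" line in both kube-proxy logs is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. The message itself names the remedy; restricting listeners to the node's primary address would look like this (flag form quoted from the log; leaving it unset is harmless on a single-node minikube cluster):

	kube-proxy --nodeport-addresses primary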
	
	
	==> kube-scheduler [4671e514f80d2410cc9da6b4fd10d2e61df7be909170bcda3b86abc8d9f81e8e] <==
	I0908 13:56:30.902704       1 serving.go:386] Generated self-signed cert in-memory
	I0908 13:56:32.688467       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:56:32.688504       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:56:32.695636       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:56:32.695812       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:56:32.696284       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:56:32.695780       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0908 13:56:32.696377       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0908 13:56:32.695826       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:56:32.697310       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:56:32.695839       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:56:32.797336       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:56:32.798229       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:56:32.799001       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0908 13:56:57.093172       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0908 13:56:57.096276       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0908 13:56:57.096344       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0908 13:56:57.096361       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0908 13:56:57.096392       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0908 13:56:57.096413       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:56:57.096715       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0908 13:56:57.104215       1 run.go:72] "command failed" err="finished without leader elect"
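
Two kube-scheduler containers appear because the functional test restarts the control plane. The first instance's closing lines are a clean graceful termination, and "finished without leader elect" is the normal exit path for a scheduler shut down while running with leader election enabled, which it is by default. A sketch of the relevant flags with their default values (assumed defaults, not taken from this run's invocation):

	kube-scheduler --leader-elect=true \
	  --leader-elect-lease-duration=15s \
	  --leader-elect-renew-deadline=10s \
	  --leader-elect-retry-period=2s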
	
	
	==> kube-scheduler [7878d0468509f2f6243b9c94beb58b5b8af90871bdb04c3e3b5b2374e16be42f] <==
	I0908 13:57:15.552257       1 serving.go:386] Generated self-signed cert in-memory
	W0908 13:57:17.008315       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0908 13:57:17.008424       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0908 13:57:17.008463       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0908 13:57:17.008495       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0908 13:57:17.114373       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0908 13:57:17.114416       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 13:57:17.120272       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0908 13:57:17.120481       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0908 13:57:17.122807       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:57:17.128155       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0908 13:57:17.228849       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.759338    4431 manager.go:1116] Failed to create existing container: /docker/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/crio-2acff923b73b120749d521a926ac1bd1a497e6cf485c99dfbb1e41e39640bfbb: Error finding container 2acff923b73b120749d521a926ac1bd1a497e6cf485c99dfbb1e41e39640bfbb: Status 404 returned error can't find the container with id 2acff923b73b120749d521a926ac1bd1a497e6cf485c99dfbb1e41e39640bfbb
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.759515    4431 manager.go:1116] Failed to create existing container: /crio-93bd1427e2d1c889b75b609834449e61a0bf938a5188b25d6bae196ffab54cd4: Error finding container 93bd1427e2d1c889b75b609834449e61a0bf938a5188b25d6bae196ffab54cd4: Status 404 returned error can't find the container with id 93bd1427e2d1c889b75b609834449e61a0bf938a5188b25d6bae196ffab54cd4
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.759698    4431 manager.go:1116] Failed to create existing container: /crio-2acff923b73b120749d521a926ac1bd1a497e6cf485c99dfbb1e41e39640bfbb: Error finding container 2acff923b73b120749d521a926ac1bd1a497e6cf485c99dfbb1e41e39640bfbb: Status 404 returned error can't find the container with id 2acff923b73b120749d521a926ac1bd1a497e6cf485c99dfbb1e41e39640bfbb
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.760021    4431 manager.go:1116] Failed to create existing container: /docker/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/crio-93bd1427e2d1c889b75b609834449e61a0bf938a5188b25d6bae196ffab54cd4: Error finding container 93bd1427e2d1c889b75b609834449e61a0bf938a5188b25d6bae196ffab54cd4: Status 404 returned error can't find the container with id 93bd1427e2d1c889b75b609834449e61a0bf938a5188b25d6bae196ffab54cd4
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.760273    4431 manager.go:1116] Failed to create existing container: /docker/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/crio-6914497ef7bdc58cc7010a3d1c3c0d05d5d08399e7171b59fcf76b1fbb292f7a: Error finding container 6914497ef7bdc58cc7010a3d1c3c0d05d5d08399e7171b59fcf76b1fbb292f7a: Status 404 returned error can't find the container with id 6914497ef7bdc58cc7010a3d1c3c0d05d5d08399e7171b59fcf76b1fbb292f7a
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.999642    4431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757341032999259848 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199132} inodes_used:{value:104}}"
	Sep 08 14:17:12 functional-026028 kubelet[4431]: E0908 14:17:12.999687    4431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757341032999259848 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199132} inodes_used:{value:104}}"
	Sep 08 14:17:20 functional-026028 kubelet[4431]: E0908 14:17:20.548395    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-fbgrs" podUID="8d0c6609-0747-425d-874a-a737755f4424"
	Sep 08 14:17:22 functional-026028 kubelet[4431]: E0908 14:17:22.549630    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="00d9819c-e0ee-4e58-8d4c-4a34fe5535a8"
	Sep 08 14:17:22 functional-026028 kubelet[4431]: E0908 14:17:22.550775    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="145a8cf5-c7f6-4a97-9ff5-93c85b1d6a9d"
	Sep 08 14:17:23 functional-026028 kubelet[4431]: E0908 14:17:23.001853    4431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757341043001487528 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199132} inodes_used:{value:104}}"
	Sep 08 14:17:23 functional-026028 kubelet[4431]: E0908 14:17:23.001891    4431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757341043001487528 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199132} inodes_used:{value:104}}"
	Sep 08 14:17:23 functional-026028 kubelet[4431]: E0908 14:17:23.548659    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rx9tp" podUID="a807f3e6-59c4-4197-94f3-ddb7114a9248"
	Sep 08 14:17:24 functional-026028 kubelet[4431]: E0908 14:17:24.548004    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-wrmb2" podUID="27b67c7a-2ad1-4d4b-ad53-b3c487f31383"
	Sep 08 14:17:31 functional-026028 kubelet[4431]: E0908 14:17:31.548122    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-fbgrs" podUID="8d0c6609-0747-425d-874a-a737755f4424"
	Sep 08 14:17:33 functional-026028 kubelet[4431]: E0908 14:17:33.003870    4431 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757341053003493606 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199132} inodes_used:{value:104}}"
	Sep 08 14:17:33 functional-026028 kubelet[4431]: E0908 14:17:33.003918    4431 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757341053003493606 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:199132} inodes_used:{value:104}}"
	Sep 08 14:17:35 functional-026028 kubelet[4431]: E0908 14:17:35.548434    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-wrmb2" podUID="27b67c7a-2ad1-4d4b-ad53-b3c487f31383"
	Sep 08 14:17:35 functional-026028 kubelet[4431]: E0908 14:17:35.548465    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="00d9819c-e0ee-4e58-8d4c-4a34fe5535a8"
	Sep 08 14:17:35 functional-026028 kubelet[4431]: E0908 14:17:35.549819    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="145a8cf5-c7f6-4a97-9ff5-93c85b1d6a9d"
	Sep 08 14:17:36 functional-026028 kubelet[4431]: E0908 14:17:36.549486    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-rx9tp" podUID="a807f3e6-59c4-4197-94f3-ddb7114a9248"
	Sep 08 14:17:38 functional-026028 kubelet[4431]: E0908 14:17:38.518124    4431 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 14:17:38 functional-026028 kubelet[4431]: E0908 14:17:38.518187    4431 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 08 14:17:38 functional-026028 kubelet[4431]: E0908 14:17:38.518276    4431 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-j67cq_kubernetes-dashboard(146b4623-9d92-4797-b679-6767f23c81ad): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 08 14:17:38 functional-026028 kubelet[4431]: E0908 14:17:38.518310    4431 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j67cq" podUID="146b4623-9d92-4797-b679-6767f23c81ad"
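
Two distinct pull-failure modes are interleaved in this kubelet log. The docker.io images (nginx, nginx:alpine, kubernetesui/dashboard, kubernetesui/metrics-scraper) reach the registry and fail with toomanyrequests, Docker Hub's unauthenticated pull rate limit. kicbase/echo-server fails one step earlier, at name resolution: CRI-O consults containers-registries.conf, and with no unqualified-search registries defined a short name never maps to a registry at all. A minimal sketch of the host-side fix, assuming the standard /etc/containers/registries.conf location inside the minikube node:

	# /etc/containers/registries.conf (sketch)
	# lets short names such as "kicbase/echo-server" resolve via Docker Hub
	unqualified-search-registries = ["docker.io"]

With that in place the short-name error becomes an ordinary pull, which can still run into the rate limit above.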
	
	
	==> storage-provisioner [706359fe9744254bb8d278c0f92761e98704af102983827e9c249defa2ffeec7] <==
	W0908 14:17:17.123091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:19.126618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:19.131475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:21.134345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:21.138717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:23.142313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:23.146778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:25.150221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:25.156959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:27.160832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:27.166265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:29.169020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:29.173566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:31.176894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:31.182504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:33.185933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:33.192666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:35.195837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:35.200356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:37.203119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:37.209974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:39.213451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:39.220114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:41.223754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 14:17:41.229521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [aad81915237f90e6cc7806c61411617012cbaaac91cbc230d1c5bd903d3952bd] <==
	I0908 13:56:28.513200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0908 13:56:31.910766       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0908 13:56:31.910824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0908 13:56:31.925833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:35.392663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:39.654565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:43.252799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:46.306913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:49.328944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:49.333786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 13:56:49.334003       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0908 13:56:49.334233       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-026028_558c3e4e-2d15-4e46-bda4-6003a6957829!
	I0908 13:56:49.334885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7fea1ad-520d-45af-87b7-0c03b059a136", APIVersion:"v1", ResourceVersion:"522", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-026028_558c3e4e-2d15-4e46-bda4-6003a6957829 became leader
	W0908 13:56:49.337265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:49.347092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0908 13:56:49.434599       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-026028_558c3e4e-2d15-4e46-bda4-6003a6957829!
	W0908 13:56:51.351542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:51.357534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:53.366523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:53.373015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:55.375739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 13:56:55.381120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
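
Both storage-provisioner logs are dominated by warnings rather than errors: the provisioner's leader election still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event above), so the API server emits a deprecation warning on every poll, yet the election itself succeeds. A minimal sketch of moving such a lock to a coordination.k8s.io Lease with client-go; the lease name and namespace mirror the log, the rest is assumed boilerplate with error handling elided:

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, _ := rest.InClusterConfig() // assumes in-cluster credentials
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // lock-holder identity

		// Lease-based lock replaces the deprecated Endpoints-annotation lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // same lease name as the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start provisioning */ },
				OnStoppedLeading: func() { /* stop work */ },
			},
		})
	}

Polling a Lease instead of an Endpoints annotation would make the per-tick deprecation warnings disappear.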
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
helpers_test.go:269: (dbg) Run:  kubectl --context functional-026028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-rx9tp kubernetes-dashboard-855c9754f9-j67cq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-026028 describe pod busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-rx9tp kubernetes-dashboard-855c9754f9-j67cq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-026028 describe pod busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-rx9tp kubernetes-dashboard-855c9754f9-j67cq: exit status 1 (127.235379ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:57:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://052077fb4ef70bf367730bdfcbbfe77c0d360f0e477b5029b178e9a2596e3175
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 13:58:16 +0000
	      Finished:     Mon, 08 Sep 2025 13:58:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w4l88 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-w4l88:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  19m   default-scheduler  Successfully assigned default/busybox-mount to functional-026028
	  Normal  Pulling    19m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.408s (21.518s including waiting). Image size: 3774172 bytes.
	  Normal  Created    19m   kubelet            Created container: mount-munger
	  Normal  Started    19m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fbgrs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 14:03:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dqfv6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dqfv6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fbgrs to functional-026028
	  Normal   Pulling    10m (x5 over 14m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     10m (x5 over 13m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     10m (x5 over 13m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    4m1s (x39 over 13m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3m24s (x42 over 13m)  kubelet            Error: ImagePullBackOff
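
The pull here never reaches a registry; it dies at short-name resolution (see the kubelet note above). Fully qualifying the image reference sidesteps the search-registry requirement entirely; a hypothetical re-run of the deployment with the qualified name:

	kubectl --context functional-026028 create deployment hello-node --image=docker.io/kicbase/echo-server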
	
	
	Name:             hello-node-connect-7d85dfc575-wrmb2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 14:02:33 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6px9r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6px9r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wrmb2 to functional-026028
	  Normal   Pulling    11m (x5 over 15m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     11m (x5 over 15m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     11m (x5 over 15m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x57 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     7s (x57 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:57:42 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xs5dx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xs5dx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  20m                   default-scheduler  Successfully assigned default/nginx-svc to functional-026028
	  Warning  Failed     19m                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     18m                   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    14m (x5 over 20m)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     13m (x5 over 19m)     kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x3 over 17m)     kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m22s (x44 over 19m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m22s (x44 over 19m)  kubelet            Error: ImagePullBackOff
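
Every failure on this pod is the same Docker Hub anonymous rate limit. Because CRI-O inside the node performs the pull, authenticating on the host does not help by itself; side-loading the image into the node is the usual minikube workaround. A sketch, assuming the image can be pulled (or is already cached) on the host and that the pod's imagePullPolicy is not Always:

	docker pull docker.io/library/nginx:alpine
	minikube -p functional-026028 image load docker.io/library/nginx:alpine

With the image preloaded, the kubelet resolves it locally and the ImagePullBackOff clears on the next sync.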
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:58:29 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxwjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fxwjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  19m                   default-scheduler  Successfully assigned default/sp-pod to functional-026028
	  Warning  Failed     12m (x5 over 17m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x3 over 16m)     kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    9m12s (x6 over 19m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m38s (x27 over 17m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m1s (x44 over 17m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m5s (x3 over 17m)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-rx9tp" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-j67cq" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-026028 describe pod busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-rx9tp kubernetes-dashboard-855c9754f9-j67cq: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.87s)

TestFunctional/parallel/ServiceCmdConnect (603.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-026028 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-026028 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wrmb2" [27b67c7a-2ad1-4d4b-ad53-b3c487f31383] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I0908 14:02:46.795793  707279 retry.go:31] will retry after 18.891048585s: Temporary Error: Get "http:": http: no Host in request URL
I0908 14:03:05.687542  707279 retry.go:31] will retry after 31.804816148s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 14:12:33.87040983 +0000 UTC m=+1952.145661756
functional_test.go:1645: (dbg) Run:  kubectl --context functional-026028 describe po hello-node-connect-7d85dfc575-wrmb2 -n default
functional_test.go:1645: (dbg) kubectl --context functional-026028 describe po hello-node-connect-7d85dfc575-wrmb2 -n default:
Name:             hello-node-connect-7d85dfc575-wrmb2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-026028/192.168.49.2
Start Time:       Mon, 08 Sep 2025 14:02:33 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6px9r (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6px9r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wrmb2 to functional-026028
Normal   Pulling    6m30s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m30s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m30s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m49s (x19 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m22s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-026028 logs hello-node-connect-7d85dfc575-wrmb2 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-026028 logs hello-node-connect-7d85dfc575-wrmb2 -n default: exit status 1 (104.213643ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wrmb2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-026028 logs hello-node-connect-7d85dfc575-wrmb2 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
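
The root cause is spelled out in the kubelet events above: the Deployment references the unqualified image name "kicbase/echo-server", and CRI-O's short-name resolution refuses to guess a registry because /etc/containers/registries.conf on the node defines neither an unqualified-search list nor an alias for that name. As a minimal sketch of the two ways such a name could be made resolvable, assuming the containers-registries.conf(5) TOML format (the node's actual file is not shown in this log):

	# Option 1: give unqualified names a registry search list.
	unqualified-search-registries = ["docker.io"]

	# Option 2: pin this particular short name to a fully-qualified reference.
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Either entry would let the pull proceed as docker.io/kicbase/echo-server; equivalently, the Deployment could reference the fully-qualified name directly and bypass short-name resolution altogether.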
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-026028 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-wrmb2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-026028/192.168.49.2
Start Time:       Mon, 08 Sep 2025 14:02:33 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6px9r (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6px9r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wrmb2 to functional-026028
Normal   Pulling    6m31s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m31s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m31s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m50s (x19 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m23s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-026028 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-026028 logs -l app=hello-node-connect: exit status 1 (94.603742ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wrmb2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-026028 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-026028 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.181.233
IPs:                      10.99.181.233
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30534/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
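
The empty Endpoints field ties the two symptoms together: the pod never became Ready, so the NodePort Service has no backends, which is consistent with the Get "http:" retries logged earlier, where the probe URL ended up with a scheme but no host. That error string comes from Go's net/http, which rejects any request URL whose Host is empty before attempting a connection; the following standalone sketch (hypothetical, not part of the test suite) reproduces it:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// "http:" parses to a URL with scheme "http" and an empty Host,
		// which net/http rejects before any connection is attempted.
		_, err := http.Get("http:")
		fmt.Println(err) // prints: Get "http:": http: no Host in request URL
	}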
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-026028
helpers_test.go:243: (dbg) docker inspect functional-026028:

-- stdout --
	[
	    {
	        "Id": "3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c",
	        "Created": "2025-09-08T13:55:04.098213063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:55:04.161602565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/hosts",
	        "LogPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c-json.log",
	        "Name": "/functional-026028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-026028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-026028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c",
	                "LowerDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c-init/diff:/var/lib/docker/overlay2/e952efc7b05509e28dc878e8f391bd641c7ca24b818c7ca2dd0e02dcca8c51fa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-026028",
	                "Source": "/var/lib/docker/volumes/functional-026028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-026028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-026028",
	                "name.minikube.sigs.k8s.io": "functional-026028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58659b49596eea8f96710b050a5b80485b1833e758b8bc0a01895293a176d953",
	            "SandboxKey": "/var/run/docker/netns/58659b49596e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-026028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:53:c0:f2:5a:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8a2325d2872d7ed1a25e04b95cca0f711ee83426dbd8e9711f763c04bbc03be",
	                    "EndpointID": "89a4a8506af874b5647adc620b92b262df1cca74fe3226741a99a791d979647b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-026028",
	                        "3e4b4cd85a48"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-026028 -n functional-026028
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 logs -n 25: (1.760624354s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-026028 ssh echo hello                                                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh cat /etc/hostname                                                                                           │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdany-port368473917/001:/mount-9p --alsologtostderr -v=1                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh -- ls -la /mount-9p                                                                                         │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh cat /mount-9p/test-1757339872950952889                                                                      │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh stat /mount-9p/created-by-test                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh stat /mount-9p/created-by-pod                                                                               │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh sudo umount -f /mount-9p                                                                                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdspecific-port1667031619/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh -- ls -la /mount-9p                                                                                         │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh sudo umount -f /mount-9p                                                                                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount1 --alsologtostderr -v=1                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount1                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount2 --alsologtostderr -v=1                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount3 --alsologtostderr -v=1                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount1                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh findmnt -T /mount2                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh findmnt -T /mount3                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ mount   │ -p functional-026028 --kill=true                                                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ addons  │ functional-026028 addons list                                                                                                     │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:02 UTC │
	│ addons  │ functional-026028 addons list -o json                                                                                             │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 14:02 UTC │ 08 Sep 25 14:02 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:56:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:56:55.544697  733418 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:55.544824  733418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:55.544829  733418 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:55.544833  733418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:55.545083  733418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 13:56:55.545442  733418 out.go:368] Setting JSON to false
	I0908 13:56:55.546382  733418 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13166,"bootTime":1757326650,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 13:56:55.546442  733418 start.go:140] virtualization:  
	I0908 13:56:55.550179  733418 out.go:179] * [functional-026028] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:56:55.553363  733418 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:56:55.553411  733418 notify.go:220] Checking for updates...
	I0908 13:56:55.559349  733418 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:56:55.562387  733418 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:56:55.565814  733418 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 13:56:55.568762  733418 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:56:55.571772  733418 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:56:55.575253  733418 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:55.575359  733418 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:56:55.608378  733418 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:56:55.608477  733418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:55.671807  733418 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 13:56:55.662108492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:56:55.671940  733418 docker.go:318] overlay module found
	I0908 13:56:55.675322  733418 out.go:179] * Using the docker driver based on existing profile
	I0908 13:56:55.678137  733418 start.go:304] selected driver: docker
	I0908 13:56:55.678146  733418 start.go:918] validating driver "docker" against &{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:55.678252  733418 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:56:55.678355  733418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:55.739089  733418 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 13:56:55.730418084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:56:55.739527  733418 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:56:55.739543  733418 cni.go:84] Creating CNI manager for ""
	I0908 13:56:55.739601  733418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:56:55.739644  733418 start.go:348] cluster config:
	{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:55.742885  733418 out.go:179] * Starting "functional-026028" primary control-plane node in "functional-026028" cluster
	I0908 13:56:55.745739  733418 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:56:55.748609  733418 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:56:55.751411  733418 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:56:55.751465  733418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 13:56:55.751473  733418 cache.go:58] Caching tarball of preloaded images
	I0908 13:56:55.751583  733418 preload.go:172] Found /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 13:56:55.751593  733418 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:56:55.751680  733418 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:56:55.751970  733418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/config.json ...
	I0908 13:56:55.774622  733418 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:56:55.774640  733418 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:56:55.774653  733418 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:56:55.774676  733418 start.go:360] acquireMachinesLock for functional-026028: {Name:mk69dd4ac33b0cfad9dc9fe87a36b18d814933ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:56:55.774730  733418 start.go:364] duration metric: took 38.228µs to acquireMachinesLock for "functional-026028"
	I0908 13:56:55.774750  733418 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:56:55.774754  733418 fix.go:54] fixHost starting: 
	I0908 13:56:55.775018  733418 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
	I0908 13:56:55.808351  733418 fix.go:112] recreateIfNeeded on functional-026028: state=Running err=<nil>
	W0908 13:56:55.808371  733418 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:56:55.811551  733418 out.go:252] * Updating the running docker "functional-026028" container ...
	I0908 13:56:55.811577  733418 machine.go:93] provisionDockerMachine start ...
	I0908 13:56:55.811677  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:55.830272  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:55.830604  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:55.830616  733418 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:56:55.959516  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-026028
	
	I0908 13:56:55.959530  733418 ubuntu.go:182] provisioning hostname "functional-026028"
	I0908 13:56:55.959591  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:55.978136  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:55.978428  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:55.978437  733418 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-026028 && echo "functional-026028" | sudo tee /etc/hostname
	I0908 13:56:56.124991  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-026028
	
	I0908 13:56:56.125062  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:56.143479  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:56.143780  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:56.143795  733418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-026028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-026028/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-026028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:56:56.268582  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 13:56:56.268597  733418 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-705422/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-705422/.minikube}
	I0908 13:56:56.268628  733418 ubuntu.go:190] setting up certificates
	I0908 13:56:56.268638  733418 provision.go:84] configureAuth start
	I0908 13:56:56.268698  733418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-026028
	I0908 13:56:56.289027  733418 provision.go:143] copyHostCerts
	I0908 13:56:56.289101  733418 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem, removing ...
	I0908 13:56:56.289136  733418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem
	I0908 13:56:56.289231  733418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem (1078 bytes)
	I0908 13:56:56.289347  733418 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem, removing ...
	I0908 13:56:56.289352  733418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem
	I0908 13:56:56.289391  733418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem (1123 bytes)
	I0908 13:56:56.289465  733418 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem, removing ...
	I0908 13:56:56.289469  733418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem
	I0908 13:56:56.289499  733418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem (1679 bytes)
	I0908 13:56:56.289572  733418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem org=jenkins.functional-026028 san=[127.0.0.1 192.168.49.2 functional-026028 localhost minikube]
	I0908 13:56:56.725760  733418 provision.go:177] copyRemoteCerts
	I0908 13:56:56.725818  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:56:56.725856  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:56.744855  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:56:56.837107  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:56:56.862671  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 13:56:56.887417  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 13:56:56.912829  733418 provision.go:87] duration metric: took 644.166433ms to configureAuth
	I0908 13:56:56.912847  733418 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:56:56.913044  733418 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:56.913148  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:56.932035  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:56.932429  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:56.932441  733418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 13:57:02.363771  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 13:57:02.363785  733418 machine.go:96] duration metric: took 6.552200312s to provisionDockerMachine
	I0908 13:57:02.363795  733418 start.go:293] postStartSetup for "functional-026028" (driver="docker")
	I0908 13:57:02.363806  733418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:57:02.363880  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:57:02.363918  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.382627  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.473749  733418 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:57:02.477813  733418 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:57:02.477840  733418 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:57:02.477849  733418 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:57:02.477855  733418 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:57:02.477865  733418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/addons for local assets ...
	I0908 13:57:02.477923  733418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/files for local assets ...
	I0908 13:57:02.478003  733418 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem -> 7072792.pem in /etc/ssl/certs
	I0908 13:57:02.478086  733418 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/test/nested/copy/707279/hosts -> hosts in /etc/test/nested/copy/707279
	I0908 13:57:02.478129  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/707279
	I0908 13:57:02.487781  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem --> /etc/ssl/certs/7072792.pem (1708 bytes)
	I0908 13:57:02.512530  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/test/nested/copy/707279/hosts --> /etc/test/nested/copy/707279/hosts (40 bytes)
	I0908 13:57:02.536743  733418 start.go:296] duration metric: took 172.933198ms for postStartSetup
	I0908 13:57:02.536825  733418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:57:02.536862  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.562168  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.649620  733418 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:57:02.655182  733418 fix.go:56] duration metric: took 6.880420388s for fixHost
	I0908 13:57:02.655198  733418 start.go:83] releasing machines lock for "functional-026028", held for 6.880460856s
	I0908 13:57:02.655278  733418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-026028
	I0908 13:57:02.674055  733418 ssh_runner.go:195] Run: cat /version.json
	I0908 13:57:02.674102  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.674370  733418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:57:02.674430  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.707981  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.711225  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.795891  733418 ssh_runner.go:195] Run: systemctl --version
	I0908 13:57:02.926795  733418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 13:57:03.067776  733418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:57:03.072280  733418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:57:03.081380  733418 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:57:03.081461  733418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:57:03.091108  733418 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 13:57:03.091124  733418 start.go:495] detecting cgroup driver to use...
	I0908 13:57:03.091157  733418 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:57:03.091247  733418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 13:57:03.104507  733418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 13:57:03.116831  733418 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:57:03.116887  733418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:57:03.130567  733418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:57:03.143256  733418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:57:03.267497  733418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:57:03.389247  733418 docker.go:234] disabling docker service ...
	I0908 13:57:03.389315  733418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:57:03.403687  733418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:57:03.415514  733418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:57:03.541421  733418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:57:03.664651  733418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:57:03.676725  733418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:57:03.694708  733418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 13:57:03.694762  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.704686  733418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 13:57:03.704765  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.714759  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.724665  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.734578  733418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:57:03.743883  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.754276  733418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.763994  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.774182  733418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:57:03.782856  733418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:57:03.791356  733418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:57:03.916068  733418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 13:57:08.259061  733418 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.342970583s)
	I0908 13:57:08.259077  733418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 13:57:08.259128  733418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 13:57:08.262802  733418 start.go:563] Will wait 60s for crictl version
	I0908 13:57:08.262863  733418 ssh_runner.go:195] Run: which crictl
	I0908 13:57:08.267437  733418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:57:08.304744  733418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 13:57:08.304817  733418 ssh_runner.go:195] Run: crio --version
	I0908 13:57:08.345246  733418 ssh_runner.go:195] Run: crio --version
	I0908 13:57:08.390576  733418 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 13:57:08.393531  733418 cli_runner.go:164] Run: docker network inspect functional-026028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 13:57:08.410641  733418 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 13:57:08.417695  733418 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 13:57:08.420637  733418 kubeadm.go:875] updating cluster {Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:57:08.420764  733418 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:57:08.420841  733418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:57:08.466044  733418 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:57:08.466056  733418 crio.go:433] Images already preloaded, skipping extraction
	I0908 13:57:08.466120  733418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:57:08.505016  733418 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:57:08.505029  733418 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:57:08.505035  733418 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0908 13:57:08.505143  733418 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-026028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
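
The unit drop-in printed above is presumably what the 367-byte scp just below writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To see the unit exactly as systemd renders it on the node (base unit plus drop-ins), something like this sketch works; the profile name matches this run:

	# sketch: inspect the rendered kubelet unit inside the minikube node
	out/minikube-linux-arm64 ssh -p functional-026028 -- sudo systemctl cat kubelet
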
	I0908 13:57:08.505233  733418 ssh_runner.go:195] Run: crio config
	I0908 13:57:08.563551  733418 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 13:57:08.563611  733418 cni.go:84] Creating CNI manager for ""
	I0908 13:57:08.563617  733418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:57:08.563625  733418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:57:08.563651  733418 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-026028 NodeName:functional-026028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:57:08.563776  733418 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-026028"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
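
The generated manifest above is what lands in /var/tmp/minikube/kubeadm.yaml.new (the 2064-byte scp a few lines below). If a config like this needs checking by hand, recent kubeadm releases can validate it offline; a sketch, assuming the binary minikube staged for this run:

	# sketch: offline sanity check of a kubeadm config file
	sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
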
	
	I0908 13:57:08.563851  733418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:57:08.572877  733418 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:57:08.572936  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:57:08.581657  733418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0908 13:57:08.599789  733418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:57:08.617779  733418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0908 13:57:08.636389  733418 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:57:08.640058  733418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:57:08.759948  733418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:57:08.772882  733418 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028 for IP: 192.168.49.2
	I0908 13:57:08.772893  733418 certs.go:194] generating shared ca certs ...
	I0908 13:57:08.772908  733418 certs.go:226] acquiring lock for ca certs: {Name:mka1c81276fce9cfa47987803f0bf5dea623834c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:57:08.773048  733418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key
	I0908 13:57:08.773089  733418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key
	I0908 13:57:08.773095  733418 certs.go:256] generating profile certs ...
	I0908 13:57:08.773174  733418 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.key
	I0908 13:57:08.773215  733418 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/apiserver.key.a683b27f
	I0908 13:57:08.773254  733418 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/proxy-client.key
	I0908 13:57:08.773360  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/707279.pem (1338 bytes)
	W0908 13:57:08.773396  733418 certs.go:480] ignoring /home/jenkins/minikube-integration/21504-705422/.minikube/certs/707279_empty.pem, impossibly tiny 0 bytes
	I0908 13:57:08.773403  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 13:57:08.773426  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem (1078 bytes)
	I0908 13:57:08.773447  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:57:08.773468  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem (1679 bytes)
	I0908 13:57:08.773508  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem (1708 bytes)
	I0908 13:57:08.774077  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:57:08.799148  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 13:57:08.823824  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:57:08.849134  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 13:57:08.873432  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 13:57:08.898970  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:57:08.923518  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:57:08.948874  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 13:57:08.973498  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem --> /usr/share/ca-certificates/7072792.pem (1708 bytes)
	I0908 13:57:08.998174  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:57:09.025729  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/certs/707279.pem --> /usr/share/ca-certificates/707279.pem (1338 bytes)
	I0908 13:57:09.050836  733418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:57:09.069811  733418 ssh_runner.go:195] Run: openssl version
	I0908 13:57:09.075398  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7072792.pem && ln -fs /usr/share/ca-certificates/7072792.pem /etc/ssl/certs/7072792.pem"
	I0908 13:57:09.085124  733418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7072792.pem
	I0908 13:57:09.088807  733418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:54 /usr/share/ca-certificates/7072792.pem
	I0908 13:57:09.088872  733418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7072792.pem
	I0908 13:57:09.095869  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7072792.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:57:09.105260  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:57:09.114857  733418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:57:09.118482  733418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:40 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:57:09.118538  733418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:57:09.126064  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:57:09.135200  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/707279.pem && ln -fs /usr/share/ca-certificates/707279.pem /etc/ssl/certs/707279.pem"
	I0908 13:57:09.144807  733418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/707279.pem
	I0908 13:57:09.148574  733418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:54 /usr/share/ca-certificates/707279.pem
	I0908 13:57:09.148632  733418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/707279.pem
	I0908 13:57:09.155895  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/707279.pem /etc/ssl/certs/51391683.0"
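
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: consumers locate a CA in /etc/ssl/certs by the certificate's subject hash, so each PEM gets a <subject-hash>.0 symlink. Reproducing the last pair by hand would look like this sketch:

	# sketch: compute the subject hash and create the lookup symlink
	hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/707279.pem)"  # prints 51391683 in this run
	sudo ln -fs /etc/ssl/certs/707279.pem "/etc/ssl/certs/${hash}.0"
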
	I0908 13:57:09.165340  733418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:57:09.169092  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:57:09.176284  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:57:09.183716  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:57:09.190714  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:57:09.197744  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:57:09.204682  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
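
Each `-checkend 86400` above asks whether the certificate remains valid for at least another 86400 seconds (24 hours): exit status 0 means yes; a non-zero status is what would trigger regeneration. In isolation:

	# sketch: -checkend as a yes/no freshness probe
	if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	  echo "valid for at least 24h"
	else
	  echo "expires within 24h (or already expired)"
	fi
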
	I0908 13:57:09.211723  733418 kubeadm.go:392] StartCluster: {Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:57:09.211810  733418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 13:57:09.211870  733418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:57:09.250211  733418 cri.go:89] found id: "a2eb7609df02e8c01390cf4d97da95e5398239a66e09a3e5323636481ddd3508"
	I0908 13:57:09.250223  733418 cri.go:89] found id: "4671e514f80d2410cc9da6b4fd10d2e61df7be909170bcda3b86abc8d9f81e8e"
	I0908 13:57:09.250227  733418 cri.go:89] found id: "88e7510fea0d0c4274fd375cb64efbb60c3b9d907dc556400874c9ec7791a1bd"
	I0908 13:57:09.250230  733418 cri.go:89] found id: "aad81915237f90e6cc7806c61411617012cbaaac91cbc230d1c5bd903d3952bd"
	I0908 13:57:09.250232  733418 cri.go:89] found id: "b1b3488e207d50f46ada696df617b63e45f9702da68cf49562cac4d36d726359"
	I0908 13:57:09.250235  733418 cri.go:89] found id: "10e4d566558b621a213415dc8eef8a81a419a1eee264fa5e0df771160d61d18c"
	I0908 13:57:09.250237  733418 cri.go:89] found id: "683391a1836d7a3ce9c858ef82f6a1f2482bf2b8552e335a29ed0476f25e53ed"
	I0908 13:57:09.250239  733418 cri.go:89] found id: "705f0150212b231d01218ac1b3bae404294054472077925f98a204294722c0ec"
	I0908 13:57:09.250241  733418 cri.go:89] found id: "7155a7bbd536bdb08c3dc9cae323ba24d8050800a300c8bfc3328d655479f40d"
	I0908 13:57:09.250252  733418 cri.go:89] found id: "b5274351c2ae9582ea3afd262ee2a16d73f826044315a9ed6e318b6e5523f4da"
	I0908 13:57:09.250254  733418 cri.go:89] found id: "191c16cb81b5dfdb6897f1913c080f2482136038353e4d5801e23e4a57215a01"
	I0908 13:57:09.250266  733418 cri.go:89] found id: "b1483ca64f95618fc6d9129b1fff8f0509458e2ac559ebb98e45599114b3ad20"
	I0908 13:57:09.250268  733418 cri.go:89] found id: "6b03c8750a13de897aff433909c63c4fbd47442ccd62e619f4363de12ec2a09e"
	I0908 13:57:09.250271  733418 cri.go:89] found id: "b0b173aca5e11d594d6386b862b13a7b44ff6b862b7483ce92b3618bfbdb09d4"
	I0908 13:57:09.250273  733418 cri.go:89] found id: "8c70d1d0be88e9e2d26229544e96d72c91aaa8dad4cd36b51d128eb1e59ebc4f"
	I0908 13:57:09.250277  733418 cri.go:89] found id: "24597b939d0264ca81e21938b94d67fe3655c6cc437c4b37a5ad11bb8ec3a2df"
	I0908 13:57:09.250279  733418 cri.go:89] found id: ""
	I0908 13:57:09.250331  733418 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
helpers_test.go:269: (dbg) Run:  kubectl --context functional-026028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-026028 describe pod busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-026028 describe pod busybox-mount hello-node-75c85bcc94-fbgrs hello-node-connect-7d85dfc575-wrmb2 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:57:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://052077fb4ef70bf367730bdfcbbfe77c0d360f0e477b5029b178e9a2596e3175
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 13:58:16 +0000
	      Finished:     Mon, 08 Sep 2025 13:58:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w4l88 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-w4l88:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/busybox-mount to functional-026028
	  Normal  Pulling    14m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     14m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.408s (21.518s including waiting). Image size: 3774172 bytes.
	  Normal  Created    14m   kubelet            Created container: mount-munger
	  Normal  Started    14m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fbgrs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 14:03:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dqfv6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dqfv6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m                      default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fbgrs to functional-026028
	  Normal   Pulling    5m28s (x5 over 8m59s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     5m12s (x5 over 8m26s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     5m12s (x5 over 8m26s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m26s (x19 over 8m26s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m3s (x21 over 8m26s)   kubelet            Back-off pulling image "kicbase/echo-server"
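
	The root cause here is not the registry but short-name resolution: "kicbase/echo-server" carries no registry prefix, and the node's /etc/containers/registries.conf defines no unqualified-search registries, so CRI-O refuses to guess one. Two possible remedies, sketched (the fully qualified image name and the chosen search registry are assumptions about intent, not part of this run):

	# option 1: fully qualify the image in the workload
	kubectl --context functional-026028 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:latest

	# option 2: declare a search registry on the node, then restart CRI-O
	out/minikube-linux-arm64 ssh -p functional-026028 -- \
	  "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio"
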
	
	
	Name:             hello-node-connect-7d85dfc575-wrmb2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 14:02:33 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6px9r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6px9r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wrmb2 to functional-026028
	  Normal   Pulling    6m34s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m34s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m34s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x39 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x39 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:57:42 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xs5dx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xs5dx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/nginx-svc to functional-026028
	  Warning  Failed     14m                   kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     13m                   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    9m19s (x5 over 14m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8m26s (x5 over 14m)   kubelet            Error: ErrImagePull
	  Warning  Failed     8m26s (x3 over 12m)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m46s (x23 over 14m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m18s (x25 over 14m)  kubelet            Error: ImagePullBackOff
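
	These failures are Docker Hub's unauthenticated pull rate limit (toomanyrequests), not a cluster fault. A common CI mitigation is to side-load the image so the kubelet never pulls it; a sketch using minikube's image loader:

	# sketch: pull once on the host (authenticated or via a mirror), then side-load
	docker pull docker.io/nginx:alpine
	out/minikube-linux-arm64 image load docker.io/nginx:alpine -p functional-026028
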
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:58:29 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxwjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fxwjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/sp-pod to functional-026028
	  Warning  Failed     10m (x2 over 12m)     kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m55s (x5 over 12m)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m55s (x3 over 11m)   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m33s (x21 over 12m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Normal   Pulling    4m7s (x6 over 14m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m33s (x27 over 12m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.73s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (249.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [92afb581-45e3-437d-93d5-3a92f98912a0] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003686916s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-026028 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-026028 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-026028 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-026028 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [00d9819c-e0ee-4e58-8d4c-4a34fe5535a8] Pending
helpers_test.go:352: "sp-pod" [00d9819c-e0ee-4e58-8d4c-4a34fe5535a8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0908 13:58:37.817444  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:37.823845  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:37.835213  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:37.856589  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:37.897980  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:37.979711  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:38.141170  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:38.462970  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:39.105162  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:40.387348  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:42.949493  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:48.071051  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:58:58.313438  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:59:18.795739  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:59:59.757129  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:01:21.678449  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-08 14:02:30.335348517 +0000 UTC m=+1348.610600410
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-026028 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-026028 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-026028/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:58:29 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxwjm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-fxwjm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-026028
Warning  Failed     62s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    49s (x2 over 2m32s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     49s (x2 over 2m32s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    35s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2s (x2 over 2m33s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2s (x3 over 2m33s)   kubelet            Error: ErrImagePull
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-026028 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-026028 logs sp-pod -n default: exit status 1 (91.189816ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-026028 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
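
`kubectl logs` returns BadRequest above because the container never started, so no log stream exists; the waiting reason has to be read from pod status instead, e.g. this sketch:

	# sketch: read the waiting reason directly from pod status
	kubectl --context functional-026028 get pod sp-pod \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'
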
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-026028
helpers_test.go:243: (dbg) docker inspect functional-026028:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c",
	        "Created": "2025-09-08T13:55:04.098213063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 728663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T13:55:04.161602565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/hosts",
	        "LogPath": "/var/lib/docker/containers/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c/3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c-json.log",
	        "Name": "/functional-026028",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-026028:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-026028",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3e4b4cd85a48c72d96e7a8788c4df776db6f248ca07d0fef51ee5e6f7b06a02c",
	                "LowerDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c-init/diff:/var/lib/docker/overlay2/e952efc7b05509e28dc878e8f391bd641c7ca24b818c7ca2dd0e02dcca8c51fa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72dfbe18950b3a050cfaa1db037db96c60d4224dc4400a4ff46b6b91b71f006c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-026028",
	                "Source": "/var/lib/docker/volumes/functional-026028/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-026028",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-026028",
	                "name.minikube.sigs.k8s.io": "functional-026028",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58659b49596eea8f96710b050a5b80485b1833e758b8bc0a01895293a176d953",
	            "SandboxKey": "/var/run/docker/netns/58659b49596e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33542"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33540"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33541"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-026028": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:53:c0:f2:5a:c6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8a2325d2872d7ed1a25e04b95cca0f711ee83426dbd8e9711f763c04bbc03be",
	                    "EndpointID": "89a4a8506af874b5647adc620b92b262df1cca74fe3226741a99a791d979647b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-026028",
	                        "3e4b4cd85a48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
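
The inspect dump above is the full "docker container inspect" output for the functional-026028 container. When a single field is all that is needed, a Go template avoids parsing the whole document; a minimal sketch, mirroring the template minikube itself runs later in this log:

    # Host port that Docker mapped to the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-026028
    # expected output per the NetworkSettings.Ports block above: 33538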
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-026028 -n functional-026028
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 logs -n 25: (1.757777306s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ functional-026028 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                         │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh -n functional-026028 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh echo hello                                                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh cat /etc/hostname                                                                                           │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdany-port368473917/001:/mount-9p --alsologtostderr -v=1                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh -- ls -la /mount-9p                                                                                         │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh cat /mount-9p/test-1757339872950952889                                                                      │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:57 UTC │ 08 Sep 25 13:57 UTC │
	│ ssh     │ functional-026028 ssh stat /mount-9p/created-by-test                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh stat /mount-9p/created-by-pod                                                                               │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh sudo umount -f /mount-9p                                                                                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdspecific-port1667031619/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh -- ls -la /mount-9p                                                                                         │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh sudo umount -f /mount-9p                                                                                    │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount1 --alsologtostderr -v=1                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount1                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount2 --alsologtostderr -v=1                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ mount   │ -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount3 --alsologtostderr -v=1                │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	│ ssh     │ functional-026028 ssh findmnt -T /mount1                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh findmnt -T /mount2                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ ssh     │ functional-026028 ssh findmnt -T /mount3                                                                                          │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │ 08 Sep 25 13:58 UTC │
	│ mount   │ -p functional-026028 --kill=true                                                                                                  │ functional-026028 │ jenkins │ v1.36.0 │ 08 Sep 25 13:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
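
Each mount row in the audit table is paired with ssh rows that probe the guest with findmnt until the 9p filesystem shows up (the first probe often lacks an end time because it fails while the mount is still coming up). A minimal sketch of that start-then-verify sequence, assuming the same profile as the table; /tmp/testdir is a placeholder path:

    # start the 9p mount in the background, then confirm it is visible inside the node
    out/minikube-linux-arm64 mount -p functional-026028 /tmp/testdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p"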
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:56:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:56:55.544697  733418 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:56:55.544824  733418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:55.544829  733418 out.go:374] Setting ErrFile to fd 2...
	I0908 13:56:55.544833  733418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:56:55.545083  733418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 13:56:55.545442  733418 out.go:368] Setting JSON to false
	I0908 13:56:55.546382  733418 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13166,"bootTime":1757326650,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 13:56:55.546442  733418 start.go:140] virtualization:  
	I0908 13:56:55.550179  733418 out.go:179] * [functional-026028] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:56:55.553363  733418 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 13:56:55.553411  733418 notify.go:220] Checking for updates...
	I0908 13:56:55.559349  733418 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:56:55.562387  733418 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:56:55.565814  733418 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 13:56:55.568762  733418 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:56:55.571772  733418 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:56:55.575253  733418 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:55.575359  733418 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:56:55.608378  733418 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:56:55.608477  733418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:55.671807  733418 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 13:56:55.662108492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:56:55.671940  733418 docker.go:318] overlay module found
	I0908 13:56:55.675322  733418 out.go:179] * Using the docker driver based on existing profile
	I0908 13:56:55.678137  733418 start.go:304] selected driver: docker
	I0908 13:56:55.678146  733418 start.go:918] validating driver "docker" against &{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:55.678252  733418 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:56:55.678355  733418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:56:55.739089  733418 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 13:56:55.730418084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:56:55.739527  733418 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 13:56:55.739543  733418 cni.go:84] Creating CNI manager for ""
	I0908 13:56:55.739601  733418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:56:55.739644  733418 start.go:348] cluster config:
	{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:56:55.742885  733418 out.go:179] * Starting "functional-026028" primary control-plane node in "functional-026028" cluster
	I0908 13:56:55.745739  733418 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:56:55.748609  733418 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:56:55.751411  733418 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:56:55.751465  733418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 13:56:55.751473  733418 cache.go:58] Caching tarball of preloaded images
	I0908 13:56:55.751583  733418 preload.go:172] Found /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 13:56:55.751593  733418 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 13:56:55.751680  733418 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:56:55.751970  733418 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/config.json ...
	I0908 13:56:55.774622  733418 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 13:56:55.774640  733418 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 13:56:55.774653  733418 cache.go:232] Successfully downloaded all kic artifacts
	I0908 13:56:55.774676  733418 start.go:360] acquireMachinesLock for functional-026028: {Name:mk69dd4ac33b0cfad9dc9fe87a36b18d814933ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 13:56:55.774730  733418 start.go:364] duration metric: took 38.228µs to acquireMachinesLock for "functional-026028"
	I0908 13:56:55.774750  733418 start.go:96] Skipping create...Using existing machine configuration
	I0908 13:56:55.774754  733418 fix.go:54] fixHost starting: 
	I0908 13:56:55.775018  733418 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
	I0908 13:56:55.808351  733418 fix.go:112] recreateIfNeeded on functional-026028: state=Running err=<nil>
	W0908 13:56:55.808371  733418 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 13:56:55.811551  733418 out.go:252] * Updating the running docker "functional-026028" container ...
	I0908 13:56:55.811577  733418 machine.go:93] provisionDockerMachine start ...
	I0908 13:56:55.811677  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:55.830272  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:55.830604  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:55.830616  733418 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 13:56:55.959516  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-026028
	
	I0908 13:56:55.959530  733418 ubuntu.go:182] provisioning hostname "functional-026028"
	I0908 13:56:55.959591  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:55.978136  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:55.978428  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:55.978437  733418 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-026028 && echo "functional-026028" | sudo tee /etc/hostname
	I0908 13:56:56.124991  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-026028
	
	I0908 13:56:56.125062  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:56.143479  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:56.143780  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:56.143795  733418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-026028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-026028/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-026028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 13:56:56.268582  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
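
The script above keeps /etc/hosts consistent with the freshly set hostname, reusing the Debian/Ubuntu 127.0.1.1 convention instead of appending a duplicate entry. A quick check of the result, assuming the same SSH access minikube uses:

    out/minikube-linux-arm64 -p functional-026028 ssh "grep 127.0.1.1 /etc/hosts"
    # expected: 127.0.1.1 functional-026028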
	I0908 13:56:56.268597  733418 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21504-705422/.minikube CaCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21504-705422/.minikube}
	I0908 13:56:56.268628  733418 ubuntu.go:190] setting up certificates
	I0908 13:56:56.268638  733418 provision.go:84] configureAuth start
	I0908 13:56:56.268698  733418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-026028
	I0908 13:56:56.289027  733418 provision.go:143] copyHostCerts
	I0908 13:56:56.289101  733418 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem, removing ...
	I0908 13:56:56.289136  733418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem
	I0908 13:56:56.289231  733418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/ca.pem (1078 bytes)
	I0908 13:56:56.289347  733418 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem, removing ...
	I0908 13:56:56.289352  733418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem
	I0908 13:56:56.289391  733418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/cert.pem (1123 bytes)
	I0908 13:56:56.289465  733418 exec_runner.go:144] found /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem, removing ...
	I0908 13:56:56.289469  733418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem
	I0908 13:56:56.289499  733418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21504-705422/.minikube/key.pem (1679 bytes)
	I0908 13:56:56.289572  733418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem org=jenkins.functional-026028 san=[127.0.0.1 192.168.49.2 functional-026028 localhost minikube]
	I0908 13:56:56.725760  733418 provision.go:177] copyRemoteCerts
	I0908 13:56:56.725818  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 13:56:56.725856  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:56.744855  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:56:56.837107  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 13:56:56.862671  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0908 13:56:56.887417  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 13:56:56.912829  733418 provision.go:87] duration metric: took 644.166433ms to configureAuth
	I0908 13:56:56.912847  733418 ubuntu.go:206] setting minikube options for container-runtime
	I0908 13:56:56.913044  733418 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:56:56.913148  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:56:56.932035  733418 main.go:141] libmachine: Using SSH client type: native
	I0908 13:56:56.932429  733418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33538 <nil> <nil>}
	I0908 13:56:56.932441  733418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 13:57:02.363771  733418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 13:57:02.363785  733418 machine.go:96] duration metric: took 6.552200312s to provisionDockerMachine
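
Most of the 6.55s above is the cri-o restart triggered by the sysconfig write at 13:56:56. To confirm the drop-in actually landed, a sketch using the same ssh path:

    out/minikube-linux-arm64 -p functional-026028 ssh "cat /etc/sysconfig/crio.minikube"
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '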
	I0908 13:57:02.363795  733418 start.go:293] postStartSetup for "functional-026028" (driver="docker")
	I0908 13:57:02.363806  733418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 13:57:02.363880  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 13:57:02.363918  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.382627  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.473749  733418 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 13:57:02.477813  733418 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 13:57:02.477840  733418 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 13:57:02.477849  733418 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 13:57:02.477855  733418 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 13:57:02.477865  733418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/addons for local assets ...
	I0908 13:57:02.477923  733418 filesync.go:126] Scanning /home/jenkins/minikube-integration/21504-705422/.minikube/files for local assets ...
	I0908 13:57:02.478003  733418 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem -> 7072792.pem in /etc/ssl/certs
	I0908 13:57:02.478086  733418 filesync.go:149] local asset: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/test/nested/copy/707279/hosts -> hosts in /etc/test/nested/copy/707279
	I0908 13:57:02.478129  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/707279
	I0908 13:57:02.487781  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem --> /etc/ssl/certs/7072792.pem (1708 bytes)
	I0908 13:57:02.512530  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/test/nested/copy/707279/hosts --> /etc/test/nested/copy/707279/hosts (40 bytes)
	I0908 13:57:02.536743  733418 start.go:296] duration metric: took 172.933198ms for postStartSetup
	I0908 13:57:02.536825  733418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:57:02.536862  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.562168  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.649620  733418 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 13:57:02.655182  733418 fix.go:56] duration metric: took 6.880420388s for fixHost
	I0908 13:57:02.655198  733418 start.go:83] releasing machines lock for "functional-026028", held for 6.880460856s
	I0908 13:57:02.655278  733418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-026028
	I0908 13:57:02.674055  733418 ssh_runner.go:195] Run: cat /version.json
	I0908 13:57:02.674102  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.674370  733418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 13:57:02.674430  733418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
	I0908 13:57:02.707981  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.711225  733418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
	I0908 13:57:02.795891  733418 ssh_runner.go:195] Run: systemctl --version
	I0908 13:57:02.926795  733418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 13:57:03.067776  733418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 13:57:03.072280  733418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:57:03.081380  733418 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 13:57:03.081461  733418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 13:57:03.091108  733418 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 13:57:03.091124  733418 start.go:495] detecting cgroup driver to use...
	I0908 13:57:03.091157  733418 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 13:57:03.091247  733418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 13:57:03.104507  733418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 13:57:03.116831  733418 docker.go:218] disabling cri-docker service (if available) ...
	I0908 13:57:03.116887  733418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 13:57:03.130567  733418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 13:57:03.143256  733418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 13:57:03.267497  733418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 13:57:03.389247  733418 docker.go:234] disabling docker service ...
	I0908 13:57:03.389315  733418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 13:57:03.403687  733418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 13:57:03.415514  733418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 13:57:03.541421  733418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 13:57:03.664651  733418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 13:57:03.676725  733418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 13:57:03.694708  733418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 13:57:03.694762  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.704686  733418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 13:57:03.704765  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.714759  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.724665  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.734578  733418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 13:57:03.743883  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.754276  733418 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.763994  733418 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 13:57:03.774182  733418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 13:57:03.782856  733418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 13:57:03.791356  733418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:57:03.916068  733418 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 13:57:08.259061  733418 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.342970583s)
	I0908 13:57:08.259077  733418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 13:57:08.259128  733418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 13:57:08.262802  733418 start.go:563] Will wait 60s for crictl version
	I0908 13:57:08.262863  733418 ssh_runner.go:195] Run: which crictl
	I0908 13:57:08.267437  733418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 13:57:08.304744  733418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 13:57:08.304817  733418 ssh_runner.go:195] Run: crio --version
	I0908 13:57:08.345246  733418 ssh_runner.go:195] Run: crio --version
	I0908 13:57:08.390576  733418 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 13:57:08.393531  733418 cli_runner.go:164] Run: docker network inspect functional-026028 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
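
The long --format template above flattens the network's name, driver, IPAM subnet and gateway, MTU, and attached container IPs into one JSON object. A smaller sketch that pulls just the IPAM block; the expected values follow the NetworkSettings section earlier in this log, with the subnet inferred from the /24 prefix:

    docker network inspect functional-026028 --format '{{json .IPAM.Config}}'
    # e.g. [{"Subnet":"192.168.49.0/24","Gateway":"192.168.49.1"}]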
	I0908 13:57:08.410641  733418 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 13:57:08.417695  733418 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 13:57:08.420637  733418 kubeadm.go:875] updating cluster {Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 13:57:08.420764  733418 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 13:57:08.420841  733418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:57:08.466044  733418 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:57:08.466056  733418 crio.go:433] Images already preloaded, skipping extraction
	I0908 13:57:08.466120  733418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 13:57:08.505016  733418 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 13:57:08.505029  733418 cache_images.go:85] Images are preloaded, skipping loading
	I0908 13:57:08.505035  733418 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0908 13:57:08.505143  733418 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-026028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
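
The unit fragment above is what minikube renders into the kubelet systemd drop-in; the scp lines further down ship it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for inspecting the merged result on the node, assuming SSH access:

    out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl cat kubelet"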
	I0908 13:57:08.505233  733418 ssh_runner.go:195] Run: crio config
	I0908 13:57:08.563551  733418 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 13:57:08.563611  733418 cni.go:84] Creating CNI manager for ""
	I0908 13:57:08.563617  733418 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:57:08.563625  733418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 13:57:08.563651  733418 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-026028 NodeName:functional-026028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 13:57:08.563776  733418 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-026028"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
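
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below). A hedged sketch for sanity-checking such a file before kubeadm consumes it, assuming the bundled kubeadm supports the validate subcommand (available since Kubernetes 1.26):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new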
	
	I0908 13:57:08.563851  733418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 13:57:08.572877  733418 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 13:57:08.572936  733418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 13:57:08.581657  733418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0908 13:57:08.599789  733418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 13:57:08.617779  733418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0908 13:57:08.636389  733418 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 13:57:08.640058  733418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 13:57:08.759948  733418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 13:57:08.772882  733418 certs.go:68] Setting up /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028 for IP: 192.168.49.2
	I0908 13:57:08.772893  733418 certs.go:194] generating shared ca certs ...
	I0908 13:57:08.772908  733418 certs.go:226] acquiring lock for ca certs: {Name:mka1c81276fce9cfa47987803f0bf5dea623834c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:57:08.773048  733418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key
	I0908 13:57:08.773089  733418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key
	I0908 13:57:08.773095  733418 certs.go:256] generating profile certs ...
	I0908 13:57:08.773174  733418 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.key
	I0908 13:57:08.773215  733418 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/apiserver.key.a683b27f
	I0908 13:57:08.773254  733418 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/proxy-client.key
	I0908 13:57:08.773360  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/707279.pem (1338 bytes)
	W0908 13:57:08.773396  733418 certs.go:480] ignoring /home/jenkins/minikube-integration/21504-705422/.minikube/certs/707279_empty.pem, impossibly tiny 0 bytes
	I0908 13:57:08.773403  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 13:57:08.773426  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/ca.pem (1078 bytes)
	I0908 13:57:08.773447  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/cert.pem (1123 bytes)
	I0908 13:57:08.773468  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/certs/key.pem (1679 bytes)
	I0908 13:57:08.773508  733418 certs.go:484] found cert: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem (1708 bytes)
	I0908 13:57:08.774077  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 13:57:08.799148  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 13:57:08.823824  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 13:57:08.849134  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 13:57:08.873432  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 13:57:08.898970  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 13:57:08.923518  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 13:57:08.948874  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 13:57:08.973498  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/ssl/certs/7072792.pem --> /usr/share/ca-certificates/7072792.pem (1708 bytes)
	I0908 13:57:08.998174  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 13:57:09.025729  733418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21504-705422/.minikube/certs/707279.pem --> /usr/share/ca-certificates/707279.pem (1338 bytes)
	I0908 13:57:09.050836  733418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 13:57:09.069811  733418 ssh_runner.go:195] Run: openssl version
	I0908 13:57:09.075398  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7072792.pem && ln -fs /usr/share/ca-certificates/7072792.pem /etc/ssl/certs/7072792.pem"
	I0908 13:57:09.085124  733418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7072792.pem
	I0908 13:57:09.088807  733418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 13:54 /usr/share/ca-certificates/7072792.pem
	I0908 13:57:09.088872  733418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7072792.pem
	I0908 13:57:09.095869  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7072792.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 13:57:09.105260  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 13:57:09.114857  733418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:57:09.118482  733418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 13:40 /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:57:09.118538  733418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 13:57:09.126064  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 13:57:09.135200  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/707279.pem && ln -fs /usr/share/ca-certificates/707279.pem /etc/ssl/certs/707279.pem"
	I0908 13:57:09.144807  733418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/707279.pem
	I0908 13:57:09.148574  733418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 13:54 /usr/share/ca-certificates/707279.pem
	I0908 13:57:09.148632  733418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/707279.pem
	I0908 13:57:09.155895  733418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/707279.pem /etc/ssl/certs/51391683.0"
	I0908 13:57:09.165340  733418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 13:57:09.169092  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 13:57:09.176284  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 13:57:09.183716  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 13:57:09.190714  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 13:57:09.197744  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 13:57:09.204682  733418 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 13:57:09.211723  733418 kubeadm.go:392] StartCluster: {Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:57:09.211810  733418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 13:57:09.211870  733418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 13:57:09.250211  733418 cri.go:89] found id: "a2eb7609df02e8c01390cf4d97da95e5398239a66e09a3e5323636481ddd3508"
	I0908 13:57:09.250223  733418 cri.go:89] found id: "4671e514f80d2410cc9da6b4fd10d2e61df7be909170bcda3b86abc8d9f81e8e"
	I0908 13:57:09.250227  733418 cri.go:89] found id: "88e7510fea0d0c4274fd375cb64efbb60c3b9d907dc556400874c9ec7791a1bd"
	I0908 13:57:09.250230  733418 cri.go:89] found id: "aad81915237f90e6cc7806c61411617012cbaaac91cbc230d1c5bd903d3952bd"
	I0908 13:57:09.250232  733418 cri.go:89] found id: "b1b3488e207d50f46ada696df617b63e45f9702da68cf49562cac4d36d726359"
	I0908 13:57:09.250235  733418 cri.go:89] found id: "10e4d566558b621a213415dc8eef8a81a419a1eee264fa5e0df771160d61d18c"
	I0908 13:57:09.250237  733418 cri.go:89] found id: "683391a1836d7a3ce9c858ef82f6a1f2482bf2b8552e335a29ed0476f25e53ed"
	I0908 13:57:09.250239  733418 cri.go:89] found id: "705f0150212b231d01218ac1b3bae404294054472077925f98a204294722c0ec"
	I0908 13:57:09.250241  733418 cri.go:89] found id: "7155a7bbd536bdb08c3dc9cae323ba24d8050800a300c8bfc3328d655479f40d"
	I0908 13:57:09.250252  733418 cri.go:89] found id: "b5274351c2ae9582ea3afd262ee2a16d73f826044315a9ed6e318b6e5523f4da"
	I0908 13:57:09.250254  733418 cri.go:89] found id: "191c16cb81b5dfdb6897f1913c080f2482136038353e4d5801e23e4a57215a01"
	I0908 13:57:09.250266  733418 cri.go:89] found id: "b1483ca64f95618fc6d9129b1fff8f0509458e2ac559ebb98e45599114b3ad20"
	I0908 13:57:09.250268  733418 cri.go:89] found id: "6b03c8750a13de897aff433909c63c4fbd47442ccd62e619f4363de12ec2a09e"
	I0908 13:57:09.250271  733418 cri.go:89] found id: "b0b173aca5e11d594d6386b862b13a7b44ff6b862b7483ce92b3618bfbdb09d4"
	I0908 13:57:09.250273  733418 cri.go:89] found id: "8c70d1d0be88e9e2d26229544e96d72c91aaa8dad4cd36b51d128eb1e59ebc4f"
	I0908 13:57:09.250277  733418 cri.go:89] found id: "24597b939d0264ca81e21938b94d67fe3655c6cc437c4b37a5ad11bb8ec3a2df"
	I0908 13:57:09.250279  733418 cri.go:89] found id: ""
	I0908 13:57:09.250331  733418 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
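The certificate wiring in the log above follows OpenSSL's hashed-directory convention: each CA file under /usr/share/ca-certificates is hashed with openssl x509 -hash, and a symlink named <hash>.0 is created in /etc/ssl/certs so the TLS stack can look the certificate up by subject hash. A minimal sketch of the same scheme, using a hypothetical cert named example.pem:

	# compute the subject hash OpenSSL uses for directory lookup
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	# link the cert into the hashed directory as <hash>.0
	sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/${HASH}.0

The trailing .0 is a collision index; a second certificate with the same subject hash would get .1, and so on.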
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
helpers_test.go:269: (dbg) Run:  kubectl --context functional-026028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-026028 describe pod busybox-mount nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-026028 describe pod busybox-mount nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:57:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://052077fb4ef70bf367730bdfcbbfe77c0d360f0e477b5029b178e9a2596e3175
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Sep 2025 13:58:16 +0000
	      Finished:     Mon, 08 Sep 2025 13:58:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w4l88 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-w4l88:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  4m39s  default-scheduler  Successfully assigned default/busybox-mount to functional-026028
	  Normal  Pulling    4m38s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m17s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.408s (21.518s including waiting). Image size: 3774172 bytes.
	  Normal  Created    4m17s  kubelet            Created container: mount-munger
	  Normal  Started    4m17s  kubelet            Started container mount-munger
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:57:42 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xs5dx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xs5dx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m51s                default-scheduler  Successfully assigned default/nginx-svc to functional-026028
	  Warning  Failed     4m20s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m6s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    84s (x4 over 4m51s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     35s (x4 over 4m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     35s (x2 over 2m6s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    11s (x6 over 4m20s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     11s (x6 over 4m20s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-026028/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 13:58:29 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxwjm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-fxwjm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m4s                 default-scheduler  Successfully assigned default/sp-pod to functional-026028
	  Warning  Failed     65s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    52s (x2 over 2m35s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     52s (x2 over 2m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    38s (x3 over 4m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     5s (x2 over 2m36s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5s (x3 over 2m36s)   kubelet            Error: ErrImagePull

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.70s)
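Of the three non-running pods above, busybox-mount actually completed (its image comes from gcr.io); nginx-svc and sp-pod are both stuck on the same root cause, docker.io answering toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit. One workaround sketch for a rerun, assuming the nginx image is already present in the local Docker daemon or minikube's image cache, is to push it into the cluster's runtime without touching the registry at all:

	# load a locally available image into the functional-026028 node's CRI-O store
	minikube -p functional-026028 image load docker.io/nginx:alpine
	# or cache it once so later starts can reuse it
	minikube cache add docker.io/nginx:alpine

Authenticated pulls (docker login plus an imagePullSecret on the pod) would also lift the limit.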

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-026028 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [145a8cf5-c7f6-4a97-9ff5-93c85b1d6a9d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-09-08 14:01:42.599988327 +0000 UTC m=+1300.875240220
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-026028 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-026028 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-026028/192.168.49.2
Start Time:       Mon, 08 Sep 2025 13:57:42 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xs5dx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xs5dx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-026028
Warning  Failed     3m29s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m15s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     75s (x3 over 3m29s)  kubelet            Error: ErrImagePull
Warning  Failed     75s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    46s (x4 over 3m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     46s (x4 over 3m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    33s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-026028 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-026028 logs nginx-svc -n default: exit status 1 (137.7461ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-026028 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.08s)
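This is the same docker.io rate-limit failure as above, surfacing through the tunnel test's 4m0s wait for nginx-svc. When triaging a run like this, one way to confirm that every stuck pod shares the pull failure, rather than inspecting pods one at a time, is to filter the namespace's events:

	kubectl --context functional-026028 get events -n default --field-selector reason=Failed

(reason=Failed is the event reason kubelet uses for both the ErrImagePull and ImagePullBackOff errors shown in the describe output above.)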

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (114.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0908 14:01:42.830014  707279 retry.go:31] will retry after 2.510403643s: Temporary Error: Get "http:": http: no Host in request URL
I0908 14:01:45.341435  707279 retry.go:31] will retry after 3.126019788s: Temporary Error: Get "http:": http: no Host in request URL
I0908 14:01:48.467885  707279 retry.go:31] will retry after 6.461528793s: Temporary Error: Get "http:": http: no Host in request URL
I0908 14:01:54.930286  707279 retry.go:31] will retry after 13.275330608s: Temporary Error: Get "http:": http: no Host in request URL
I0908 14:02:08.205851  707279 retry.go:31] will retry after 12.040972604s: Temporary Error: Get "http:": http: no Host in request URL
I0908 14:02:20.247344  707279 retry.go:31] will retry after 26.548249122s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-026028 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.103.122.170   10.103.122.170   80:30835/TCP   5m55s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (114.74s)
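The retried Get "http:" errors above mean the test was handed an empty URL: because WaitService never saw a ready nginx-svc pod, there was no working endpoint behind the LoadBalancer IP that minikube tunnel published. With the pod actually running, the equivalent manual check would look roughly like this (tunnel in one shell, curl in another):

	# in one shell: publish LoadBalancer IPs on the host
	minikube -p functional-026028 tunnel
	# in another: read the external IP and hit it
	kubectl --context functional-026028 get svc nginx-svc
	curl http://10.103.122.170

Note that the service object itself is healthy (an EXTERNAL-IP was assigned, as shown above); only the backing pod is missing.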

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-026028 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-026028 expose deployment hello-node --type=NodePort --port=8080
E0908 14:03:37.816831  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fbgrs" [8d0c6609-0747-425d-874a-a737755f4424] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0908 14:04:05.520274  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:08:37.816589  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-026028 -n functional-026028
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 14:13:38.224025331 +0000 UTC m=+2016.499277224
functional_test.go:1460: (dbg) Run:  kubectl --context functional-026028 describe po hello-node-75c85bcc94-fbgrs -n default
functional_test.go:1460: (dbg) kubectl --context functional-026028 describe po hello-node-75c85bcc94-fbgrs -n default:
Name:             hello-node-75c85bcc94-fbgrs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-026028/192.168.49.2
Start Time:       Mon, 08 Sep 2025 14:03:37 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dqfv6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-dqfv6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fbgrs to functional-026028
Normal   Pulling    6m29s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m13s (x5 over 9m27s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m13s (x5 over 9m27s)   kubelet            Error: ErrImagePull
Warning  Failed     4m27s (x19 over 9m27s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m4s (x21 over 9m27s)   kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-026028 logs hello-node-75c85bcc94-fbgrs -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-026028 logs hello-node-75c85bcc94-fbgrs -n default: exit status 1 (81.517246ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-fbgrs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-026028 logs hello-node-75c85bcc94-fbgrs -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.72s)
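Unlike the docker.io rate-limit failures earlier, this one is a short-name resolution error from CRI-O: kicbase/echo-server carries no registry prefix, and the node's /etc/containers/registries.conf defines no unqualified-search registries to expand it against. Two possible fixes, sketched under the assumption that the image lives on Docker Hub:

	# 1) fully qualify the image so no search list is needed
	kubectl --context functional-026028 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest

	# 2) or define a search list on the node, inside the minikube container,
	#    in /etc/containers/registries.conf:
	#    unqualified-search-registries = ["docker.io"]

Option 1 is the usual fix in test code, since it keeps image resolution deterministic across runtimes.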

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 service --namespace=default --https --url hello-node: exit status 115 (383.939556ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31279
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-026028 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 service hello-node --url --format={{.IP}}: exit status 115 (399.474497ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-026028 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 service hello-node --url: exit status 115 (430.799702ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31279
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-026028 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31279
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.43s)
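The three ServiceCmd failures above (HTTPS, Format, URL) are all the same SVC_UNREACHABLE exit: the URL machinery itself works, since a NodePort URL is printed each time, but minikube service refuses to report success because no running pod backs hello-node, cascading directly from the DeployApp image-pull failure. A quick confirmation, had one been needed:

	# an empty ENDPOINTS column confirms no pod backs the service
	kubectl --context functional-026028 get endpoints hello-node

Fixing the image reference in DeployApp would make all three of these pass without further changes.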

                                                
                                    

Test pass (288/331)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 8.32
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 4.98
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.09
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 199.33
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 11.89
35 TestAddons/parallel/Registry 40.28
36 TestAddons/parallel/RegistryCreds 0.71
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 6.86
42 TestAddons/parallel/Headlamp 47.82
43 TestAddons/parallel/CloudSpanner 5.9
44 TestAddons/parallel/LocalPath 51.29
45 TestAddons/parallel/NvidiaDevicePlugin 6.6
46 TestAddons/parallel/Yakd 11.77
48 TestAddons/StoppedEnableDisable 12.2
49 TestCertOptions 41.32
50 TestCertExpiration 255.08
52 TestForceSystemdFlag 40.46
53 TestForceSystemdEnv 37.87
59 TestErrorSpam/setup 33.01
60 TestErrorSpam/start 0.81
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 1.74
63 TestErrorSpam/unpause 1.91
64 TestErrorSpam/stop 1.48
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 79.7
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.09
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.03
76 TestFunctional/serial/CacheCmd/cache/add_local 1.38
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 36.93
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.78
87 TestFunctional/serial/LogsFileCmd 1.76
88 TestFunctional/serial/InvalidService 4.68
90 TestFunctional/parallel/ConfigCmd 0.42
92 TestFunctional/parallel/DryRun 0.48
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1
99 TestFunctional/parallel/AddonsCmd 0.15
102 TestFunctional/parallel/SSHCmd 0.54
103 TestFunctional/parallel/CpCmd 1.62
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.65
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
114 TestFunctional/parallel/License 0.32
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
117 TestFunctional/parallel/Version/short 0.08
118 TestFunctional/parallel/Version/components 1.18
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.94
127 TestFunctional/parallel/ImageCommands/Setup 0.7
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.28
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
138 TestFunctional/parallel/MountCmd/any-port 26.72
139 TestFunctional/parallel/MountCmd/specific-port 1.87
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
148 TestFunctional/parallel/ProfileCmd/profile_list 0.42
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
150 TestFunctional/parallel/ServiceCmd/List 1.3
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.3
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 169.53
163 TestMultiControlPlane/serial/DeployApp 9.54
164 TestMultiControlPlane/serial/PingHostFromPods 1.83
165 TestMultiControlPlane/serial/AddWorkerNode 60.65
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1
168 TestMultiControlPlane/serial/CopyFile 19.4
169 TestMultiControlPlane/serial/StopSecondaryNode 12.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
171 TestMultiControlPlane/serial/RestartSecondaryNode 29.67
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.26
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 141.6
174 TestMultiControlPlane/serial/DeleteSecondaryNode 13.11
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
176 TestMultiControlPlane/serial/StopCluster 35.74
177 TestMultiControlPlane/serial/RestartCluster 90.69
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 77.85
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
184 TestJSONOutput/start/Command 83.95
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.75
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.67
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.82
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 40.44
210 TestKicCustomNetwork/use_default_bridge_network 38.6
211 TestKicExistingNetwork 33.66
212 TestKicCustomSubnet 36.36
213 TestKicStaticIP 37.49
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 71.62
218 TestMountStart/serial/StartWithMountFirst 6.81
219 TestMountStart/serial/VerifyMountFirst 0.26
220 TestMountStart/serial/StartWithMountSecond 6.34
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.64
223 TestMountStart/serial/VerifyMountPostDelete 0.25
224 TestMountStart/serial/Stop 1.2
225 TestMountStart/serial/RestartStopped 7.54
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 136.37
230 TestMultiNode/serial/DeployApp2Nodes 6.68
231 TestMultiNode/serial/PingHostFrom2Pods 1.07
232 TestMultiNode/serial/AddNode 56.3
233 TestMultiNode/serial/MultiNodeLabels 0.1
234 TestMultiNode/serial/ProfileList 0.68
235 TestMultiNode/serial/CopyFile 10.19
236 TestMultiNode/serial/StopNode 2.29
237 TestMultiNode/serial/StartAfterStop 8.21
238 TestMultiNode/serial/RestartKeepsNodes 74.9
239 TestMultiNode/serial/DeleteNode 5.42
240 TestMultiNode/serial/StopMultiNode 23.82
241 TestMultiNode/serial/RestartMultiNode 53.24
242 TestMultiNode/serial/ValidateNameConflict 35.06
247 TestPreload 105.3
249 TestScheduledStopUnix 109.16
252 TestInsufficientStorage 10.52
253 TestRunningBinaryUpgrade 60.75
255 TestKubernetesUpgrade 196.59
256 TestMissingContainerUpgrade 127.59
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 51.17
260 TestNoKubernetes/serial/StartWithStopK8s 12.28
261 TestNoKubernetes/serial/Start 9.19
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 0.65
264 TestNoKubernetes/serial/Stop 1.2
265 TestNoKubernetes/serial/StartNoArgs 6.69
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
267 TestStoppedBinaryUpgrade/Setup 1.82
268 TestStoppedBinaryUpgrade/Upgrade 62.43
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
278 TestPause/serial/Start 86.74
286 TestNetworkPlugins/group/false 3.83
290 TestPause/serial/SecondStartNoReconfiguration 27.9
291 TestPause/serial/Pause 1.03
292 TestPause/serial/VerifyStatus 0.39
293 TestPause/serial/Unpause 0.84
294 TestPause/serial/PauseAgain 1.33
295 TestPause/serial/DeletePaused 3.25
296 TestPause/serial/VerifyDeletedResources 0.52
298 TestStartStop/group/old-k8s-version/serial/FirstStart 61.28
299 TestStartStop/group/old-k8s-version/serial/DeployApp 10.45
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
301 TestStartStop/group/old-k8s-version/serial/Stop 11.93
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
303 TestStartStop/group/old-k8s-version/serial/SecondStart 48.51
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
307 TestStartStop/group/old-k8s-version/serial/Pause 3.19
309 TestStartStop/group/no-preload/serial/FirstStart 69.87
311 TestStartStop/group/embed-certs/serial/FirstStart 81.7
312 TestStartStop/group/no-preload/serial/DeployApp 11.5
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.75
314 TestStartStop/group/no-preload/serial/Stop 12.23
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
316 TestStartStop/group/no-preload/serial/SecondStart 49.88
317 TestStartStop/group/embed-certs/serial/DeployApp 10.35
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
320 TestStartStop/group/embed-certs/serial/Stop 12.05
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
323 TestStartStop/group/no-preload/serial/Pause 3.19
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.35
325 TestStartStop/group/embed-certs/serial/SecondStart 57.47
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.08
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
331 TestStartStop/group/embed-certs/serial/Pause 3.12
333 TestStartStop/group/newest-cni/serial/FirstStart 41.29
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.74
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.2
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
341 TestStartStop/group/newest-cni/serial/Stop 1.23
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
343 TestStartStop/group/newest-cni/serial/SecondStart 22.42
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
347 TestStartStop/group/newest-cni/serial/Pause 3.89
348 TestNetworkPlugins/group/auto/Start 81.29
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.87
353 TestNetworkPlugins/group/kindnet/Start 83.35
354 TestNetworkPlugins/group/auto/KubeletFlags 0.31
355 TestNetworkPlugins/group/auto/NetCatPod 11.29
356 TestNetworkPlugins/group/auto/DNS 0.2
357 TestNetworkPlugins/group/auto/Localhost 0.17
358 TestNetworkPlugins/group/auto/HairPin 0.16
359 TestNetworkPlugins/group/calico/Start 66.14
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
362 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
363 TestNetworkPlugins/group/kindnet/DNS 0.29
364 TestNetworkPlugins/group/kindnet/Localhost 0.22
365 TestNetworkPlugins/group/kindnet/HairPin 0.17
366 TestNetworkPlugins/group/custom-flannel/Start 71.54
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.37
369 TestNetworkPlugins/group/calico/NetCatPod 13.35
370 TestNetworkPlugins/group/calico/DNS 0.34
371 TestNetworkPlugins/group/calico/Localhost 0.23
372 TestNetworkPlugins/group/calico/HairPin 0.29
373 TestNetworkPlugins/group/enable-default-cni/Start 83.09
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.33
376 TestNetworkPlugins/group/custom-flannel/DNS 0.26
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
379 TestNetworkPlugins/group/flannel/Start 109.02
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
385 TestNetworkPlugins/group/bridge/Start 47.15
386 TestNetworkPlugins/group/flannel/ControllerPod 6
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
388 TestNetworkPlugins/group/bridge/NetCatPod 10.27
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
390 TestNetworkPlugins/group/flannel/NetCatPod 10.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.16
394 TestNetworkPlugins/group/flannel/DNS 0.18
395 TestNetworkPlugins/group/flannel/Localhost 0.15
396 TestNetworkPlugins/group/flannel/HairPin 0.15
TestDownloadOnly/v1.28.0/json-events (8.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-903641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-903641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.324303704s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.32s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 13:40:10.091944  707279 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 13:40:10.092034  707279 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-903641
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-903641: exit status 85 (93.017779ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-903641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-903641 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:40:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:40:01.813641  707284 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:40:01.813830  707284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:01.813840  707284 out.go:374] Setting ErrFile to fd 2...
	I0908 13:40:01.813844  707284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:01.814115  707284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	W0908 13:40:01.814303  707284 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21504-705422/.minikube/config/config.json: open /home/jenkins/minikube-integration/21504-705422/.minikube/config/config.json: no such file or directory
	I0908 13:40:01.814744  707284 out.go:368] Setting JSON to true
	I0908 13:40:01.815639  707284 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12152,"bootTime":1757326650,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 13:40:01.815721  707284 start.go:140] virtualization:  
	I0908 13:40:01.819912  707284 out.go:99] [download-only-903641] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 13:40:01.820154  707284 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 13:40:01.820286  707284 notify.go:220] Checking for updates...
	I0908 13:40:01.823219  707284 out.go:171] MINIKUBE_LOCATION=21504
	I0908 13:40:01.826414  707284 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:40:01.829557  707284 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:40:01.832566  707284 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 13:40:01.835673  707284 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 13:40:01.841686  707284 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:40:01.842084  707284 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:40:01.870025  707284 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:40:01.870149  707284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:01.928267  707284 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 13:40:01.918678793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:01.928381  707284 docker.go:318] overlay module found
	I0908 13:40:01.931255  707284 out.go:99] Using the docker driver based on user configuration
	I0908 13:40:01.931295  707284 start.go:304] selected driver: docker
	I0908 13:40:01.931305  707284 start.go:918] validating driver "docker" against <nil>
	I0908 13:40:01.931416  707284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:01.987171  707284 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 13:40:01.977434393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:01.987338  707284 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:40:01.987623  707284 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 13:40:01.987785  707284 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:40:01.990885  707284 out.go:171] Using Docker driver with root privileges
	I0908 13:40:01.993702  707284 cni.go:84] Creating CNI manager for ""
	I0908 13:40:01.993784  707284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 13:40:01.993799  707284 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 13:40:01.993884  707284 start.go:348] cluster config:
	{Name:download-only-903641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-903641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 13:40:01.996907  707284 out.go:99] Starting "download-only-903641" primary control-plane node in "download-only-903641" cluster
	I0908 13:40:01.996947  707284 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 13:40:01.999775  707284 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 13:40:01.999806  707284 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:40:01.999977  707284 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 13:40:02.020355  707284 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:40:02.021256  707284 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 13:40:02.021401  707284 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 13:40:02.059258  707284 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0908 13:40:02.059283  707284 cache.go:58] Caching tarball of preloaded images
	I0908 13:40:02.059462  707284 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:40:02.062735  707284 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 13:40:02.062775  707284 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 13:40:02.156003  707284 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0908 13:40:05.355494  707284 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 13:40:05.355702  707284 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 13:40:06.301230  707284 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0908 13:40:06.301611  707284 profile.go:143] Saving config to /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/download-only-903641/config.json ...
	I0908 13:40:06.301646  707284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/download-only-903641/config.json: {Name:mk68f978b5b0146ab67c680015e3bacea05b56d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 13:40:06.302461  707284 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 13:40:06.302680  707284 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21504-705422/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-903641 host does not exist
	  To start a cluster, run: "minikube start -p download-only-903641"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
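
For reference, the preload fetch logged above appends an md5 checksum to the download URL and verifies the tarball before caching it. A minimal shell sketch of the same check, using the URL and checksum taken from the log (the local filename is illustrative):

# Download the v1.28.0 CRI-O preload and verify its md5 before trusting it (sketch).
URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
curl -fsSL -o preload.tar.lz4 "$URL"
echo "e092595ade89dbfc477bd4cd6b9c633b  preload.tar.lz4" | md5sum -c -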

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-903641
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (4.98s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-445998 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-445998 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.984292715s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.98s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 13:40:15.521089  707279 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 13:40:15.521130  707279 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21504-705422/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-445998
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-445998: exit status 85 (89.949905ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-903641 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-903641 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ delete  │ -p download-only-903641                                                                                                                                                   │ download-only-903641 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │ 08 Sep 25 13:40 UTC │
	│ start   │ -o=json --download-only -p download-only-445998 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-445998 │ jenkins │ v1.36.0 │ 08 Sep 25 13:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 13:40:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 13:40:10.586591  707484 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:40:10.586789  707484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:10.586817  707484 out.go:374] Setting ErrFile to fd 2...
	I0908 13:40:10.586836  707484 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:40:10.587120  707484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 13:40:10.587581  707484 out.go:368] Setting JSON to true
	I0908 13:40:10.588497  707484 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12161,"bootTime":1757326650,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 13:40:10.588596  707484 start.go:140] virtualization:  
	I0908 13:40:10.592039  707484 out.go:99] [download-only-445998] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:40:10.592337  707484 notify.go:220] Checking for updates...
	I0908 13:40:10.595273  707484 out.go:171] MINIKUBE_LOCATION=21504
	I0908 13:40:10.598592  707484 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:40:10.601622  707484 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 13:40:10.604539  707484 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 13:40:10.607517  707484 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 13:40:10.613342  707484 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 13:40:10.613641  707484 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:40:10.640275  707484 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:40:10.640393  707484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:10.698314  707484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 13:40:10.689197582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:10.698433  707484 docker.go:318] overlay module found
	I0908 13:40:10.701485  707484 out.go:99] Using the docker driver based on user configuration
	I0908 13:40:10.701529  707484 start.go:304] selected driver: docker
	I0908 13:40:10.701541  707484 start.go:918] validating driver "docker" against <nil>
	I0908 13:40:10.701651  707484 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:40:10.751923  707484 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 13:40:10.742836114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:40:10.752117  707484 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 13:40:10.752426  707484 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 13:40:10.752596  707484 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 13:40:10.755818  707484 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-445998 host does not exist
	  To start a cluster, run: "minikube start -p download-only-445998"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-445998
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0908 13:40:16.887451  707279 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-025043 --alsologtostderr --binary-mirror http://127.0.0.1:39871 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-025043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-025043
--- PASS: TestBinaryMirror (0.60s)
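
TestBinaryMirror starts minikube with --binary-mirror so the kubectl/kubelet downloads come from a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of reproducing that setup by hand, assuming a directory of release binaries laid out the way the mirror expects (the directory path and profile name are illustrative; the port matches the log above):

# Serve binaries locally, then point minikube's downloads at the mirror (sketch).
python3 -m http.server 39871 --directory /srv/k8s-binaries &
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:39871 --driver=docker --container-runtime=crio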

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-242829
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-242829: exit status 85 (74.714419ms)
-- stdout --
	* Profile "addons-242829" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-242829"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-242829
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-242829: exit status 85 (74.950845ms)
-- stdout --
	* Profile "addons-242829" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-242829"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (199.33s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-242829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-242829 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m19.327513269s)
--- PASS: TestAddons/Setup (199.33s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-242829 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-242829 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (11.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-242829 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-242829 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [38cb340e-c6aa-4bee-b2dd-408b6a45ad50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [38cb340e-c6aa-4bee-b2dd-408b6a45ad50] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003924608s
addons_test.go:694: (dbg) Run:  kubectl --context addons-242829 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-242829 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-242829 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-242829 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.89s)

TestAddons/parallel/Registry (40.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 16.342041ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-rsbjh" [f14de69e-d212-4a48-81f3-6e0b493a2d62] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003579376s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-vctb4" [ae615f1b-8f69-4eb7-9497-285649bc6650] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003217038s
addons_test.go:392: (dbg) Run:  kubectl --context addons-242829 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-242829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-242829 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (27.248227468s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 ip
2025/09/08 13:44:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (40.28s)

TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.579324ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-242829
addons_test.go:332: (dbg) Run:  kubectl --context addons-242829 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-kldqv" [52d29402-2637-4587-8f70-f313af71f2be] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003200398s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

TestAddons/parallel/MetricsServer (6.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.958444ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-hhpn4" [72b3cfa9-428a-4c02-add5-147e52588ae7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004956806s
addons_test.go:463: (dbg) Run:  kubectl --context addons-242829 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)

TestAddons/parallel/Headlamp (47.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-242829 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-2548z" [49ab1120-a2cd-45ce-9b57-135d9695671b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-2548z" [49ab1120-a2cd-45ce-9b57-135d9695671b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 41.002862751s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 addons disable headlamp --alsologtostderr -v=1: (5.841605891s)
--- PASS: TestAddons/parallel/Headlamp (47.82s)

TestAddons/parallel/CloudSpanner (5.9s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-t6r8c" [483836e8-cb62-4099-bd70-02f73072fa47] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005469494s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.90s)

TestAddons/parallel/LocalPath (51.29s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-242829 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-242829 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-242829 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [db40bac5-1fc1-42a3-b52d-592c83e64264] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [db40bac5-1fc1-42a3-b52d-592c83e64264] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [db40bac5-1fc1-42a3-b52d-592c83e64264] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004209605s
addons_test.go:967: (dbg) Run:  kubectl --context addons-242829 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 ssh "cat /opt/local-path-provisioner/pvc-2fff7e83-b238-4284-875d-267426c27321_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-242829 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-242829 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.949383671s)
--- PASS: TestAddons/parallel/LocalPath (51.29s)
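
The LocalPath test exercises the storage-provisioner-rancher addon end to end: a PVC binds, a pod writes through it, and the file shows up under /opt/local-path-provisioner on the node. A minimal PVC of the kind testdata/storage-provisioner-rancher/pvc.yaml provisions (a sketch, not the actual fixture; the storage class name local-path is assumed to be the addon's default):

kubectl --context addons-242829 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 64Mi
EOF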

TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-g8fsg" [f115ce75-ec51-411e-a28c-6f0ad7f20cd9] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003947669s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (11.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-p2t9z" [a5fbf356-3131-4836-8427-2abeb46d93e4] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00313853s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-242829 addons disable yakd --alsologtostderr -v=1: (5.7624923s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-242829
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-242829: (11.916062506s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-242829
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-242829
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-242829
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (41.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-765959 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0908 14:52:41.754481  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-765959 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.618070936s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-765959 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-765959 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-765959 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-765959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-765959
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-765959: (2.019506443s)
--- PASS: TestCertOptions (41.32s)
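
TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names values land in the API server certificate and that the non-default port 8555 is honored. A quick way to eyeball the SANs on such a profile, reusing the test's own openssl invocation (the grep filter is an illustrative addition):

out/minikube-linux-arm64 -p cert-options-765959 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"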

TestCertExpiration (255.08s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-846472 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-846472 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.879246416s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-846472 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-846472 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.214834191s)
helpers_test.go:175: Cleaning up "cert-expiration-846472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-846472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-846472: (2.988995882s)
--- PASS: TestCertExpiration (255.08s)

TestForceSystemdFlag (40.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-667318 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-667318 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.698908749s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-667318 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-667318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-667318
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-667318: (2.461085021s)
--- PASS: TestForceSystemdFlag (40.46s)
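Note: the `cat /etc/crio/crio.conf.d/02-crio.conf` step is the actual systemd check; a minimal sketch of the same inspection (the grep is illustrative):

	# With --force-systemd, the CRI-O drop-in should pin the systemd cgroup manager.
	out/minikube-linux-arm64 -p force-systemd-flag-667318 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager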

TestForceSystemdEnv (37.87s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-268673 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-268673 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.347086324s)
helpers_test.go:175: Cleaning up "force-systemd-env-268673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-268673
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-268673: (2.518205341s)
--- PASS: TestForceSystemdEnv (37.87s)
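Note: this variant drives the same systemd selection through the environment rather than a flag; a minimal sketch (MINIKUBE_FORCE_SYSTEMD is minikube's documented env toggle; the profile name is taken from this run):

	# Equivalent of --force-systemd, supplied via the environment.
	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-268673 \
	  --memory=3072 --driver=docker --container-runtime=crio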

TestErrorSpam/setup (33.01s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-903492 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-903492 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-903492 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-903492 --driver=docker  --container-runtime=crio: (33.014331947s)
--- PASS: TestErrorSpam/setup (33.01s)

TestErrorSpam/start (0.81s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.12s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.74s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.91s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (1.48s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 stop: (1.265853217s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-903492 --log_dir /tmp/nospam-903492 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21504-705422/.minikube/files/etc/test/nested/copy/707279/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.7s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-026028 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-026028 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.694749856s)
--- PASS: TestFunctional/serial/StartWithProxy (79.70s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.09s)
=== RUN   TestFunctional/serial/SoftStart
I0908 13:56:17.957174  707279 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-026028 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-026028 --alsologtostderr -v=8: (29.088737012s)
functional_test.go:678: soft start took 29.08922259s for "functional-026028" cluster.
I0908 13:56:47.046190  707279 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (29.09s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-026028 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 cache add registry.k8s.io/pause:3.1: (1.393635149s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 cache add registry.k8s.io/pause:3.3: (1.354050996s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 cache add registry.k8s.io/pause:latest: (1.281990127s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-026028 /tmp/TestFunctionalserialCacheCmdcacheadd_local2653886872/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cache add minikube-local-cache-test:functional-026028
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cache delete minikube-local-cache-test:functional-026028
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-026028
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.214841ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 cache reload: (1.12378417s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
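Note: the round trip above is an rmi on the node, a failing `crictl inspecti`, then `cache reload` to restore the image from minikube's local cache; a minimal sketch:

	# Delete the image inside the node, reload it from the cache, confirm it is back.
	out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-026028 cache reload
	out/minikube-linux-arm64 -p functional-026028 ssh sudo crictl inspecti registry.k8s.io/pause:latest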

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 kubectl -- --context functional-026028 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-026028 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (36.93s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-026028 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-026028 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.932919954s)
functional_test.go:776: restart took 36.933023865s for "functional-026028" cluster.
I0908 13:57:32.431496  707279 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (36.93s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-026028 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.78s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 logs: (1.780254423s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.76s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 logs --file /tmp/TestFunctionalserialLogsFileCmd582892802/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 logs --file /tmp/TestFunctionalserialLogsFileCmd582892802/001/logs.txt: (1.763152657s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

TestFunctional/serial/InvalidService (4.68s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-026028 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-026028
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-026028: exit status 115 (728.10319ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32435 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-026028 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.68s)
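Note: the SVC_UNREACHABLE exit here is expected, since invalid-svc has no running backing pod; a minimal sketch of confirming that independently (kubectl only, not part of the test):

	# A selector that matches no ready pods leaves the service's endpoints empty.
	kubectl --context functional-026028 get endpoints invalid-svc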

TestFunctional/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 config get cpus: exit status 14 (64.432667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 config get cpus: exit status 14 (69.04528ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
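Note: exit status 14 is `config get` reporting an unset key, which the unset/set/unset round trip above exercises; a minimal sketch:

	# Set, read back, unset, then show the non-zero exit for a missing key.
	out/minikube-linux-arm64 -p functional-026028 config set cpus 2
	out/minikube-linux-arm64 -p functional-026028 config get cpus
	out/minikube-linux-arm64 -p functional-026028 config unset cpus
	out/minikube-linux-arm64 -p functional-026028 config get cpus || echo "exit status $?"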

TestFunctional/parallel/DryRun (0.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-026028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-026028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (220.779117ms)

-- stdout --
	* [functional-026028] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0908 14:12:38.368825  741121 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:12:38.369002  741121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:12:38.369024  741121 out.go:374] Setting ErrFile to fd 2...
	I0908 14:12:38.369041  741121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:12:38.369341  741121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:12:38.369752  741121 out.go:368] Setting JSON to false
	I0908 14:12:38.370715  741121 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14109,"bootTime":1757326650,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 14:12:38.370814  741121 start.go:140] virtualization:  
	I0908 14:12:38.374185  741121 out.go:179] * [functional-026028] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:12:38.377295  741121 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:12:38.377362  741121 notify.go:220] Checking for updates...
	I0908 14:12:38.383121  741121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:12:38.386039  741121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 14:12:38.388863  741121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 14:12:38.391735  741121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:12:38.394566  741121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:12:38.397965  741121 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:12:38.398604  741121 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:12:38.426878  741121 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:12:38.427007  741121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:12:38.516248  741121 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 14:12:38.505322489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:12:38.516360  741121 docker.go:318] overlay module found
	I0908 14:12:38.519535  741121 out.go:179] * Using the docker driver based on existing profile
	I0908 14:12:38.522342  741121 start.go:304] selected driver: docker
	I0908 14:12:38.522364  741121 start.go:918] validating driver "docker" against &{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:12:38.522475  741121 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:12:38.526068  741121 out.go:203] 
	W0908 14:12:38.528854  741121 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 14:12:38.531819  741121 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-026028 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
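Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit is the expected outcome: --dry-run still runs argument validation, so the undersized --memory fails fast without touching the existing cluster; a minimal sketch:

	# Validation-only start; exits 23 because 250MB is below the usable minimum of 1800MB.
	out/minikube-linux-arm64 start -p functional-026028 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio; echo "exit status $?"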

TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-026028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-026028 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (212.204882ms)

-- stdout --
	* [functional-026028] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0908 14:12:39.846773  741443 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:12:39.846950  741443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:12:39.846978  741443 out.go:374] Setting ErrFile to fd 2...
	I0908 14:12:39.847000  741443 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:12:39.847392  741443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:12:39.847830  741443 out.go:368] Setting JSON to false
	I0908 14:12:39.848774  741443 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14110,"bootTime":1757326650,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 14:12:39.848880  741443 start.go:140] virtualization:  
	I0908 14:12:39.852385  741443 out.go:179] * [functional-026028] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 14:12:39.855412  741443 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:12:39.855476  741443 notify.go:220] Checking for updates...
	I0908 14:12:39.861486  741443 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:12:39.864356  741443 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 14:12:39.867174  741443 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 14:12:39.870045  741443 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:12:39.872894  741443 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:12:39.876327  741443 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:12:39.876874  741443 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:12:39.911043  741443 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:12:39.911154  741443 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:12:39.979555  741443 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 14:12:39.970082784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:12:39.979673  741443 docker.go:318] overlay module found
	I0908 14:12:39.982865  741443 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 14:12:39.985643  741443 start.go:304] selected driver: docker
	I0908 14:12:39.985666  741443 start.go:918] validating driver "docker" against &{Name:functional-026028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-026028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 14:12:39.985781  741443 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:12:39.989423  741443 out.go:203] 
	W0908 14:12:39.992309  741443 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 14:12:39.995128  741443 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.54s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.62s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh -n functional-026028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cp functional-026028:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2448746361/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh -n functional-026028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh -n functional-026028 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

TestFunctional/parallel/FileSync (0.28s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/707279/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /etc/test/nested/copy/707279/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.65s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/707279.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /etc/ssl/certs/707279.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/707279.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /usr/share/ca-certificates/707279.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7072792.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /etc/ssl/certs/7072792.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7072792.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /usr/share/ca-certificates/7072792.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)
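Note: the `51391683.0` and `3ec20f2e.0` names checked above follow OpenSSL's subject-hash convention for /etc/ssl/certs symlinks; a minimal sketch of recomputing a hash (illustrative; the printed hash should match the filename's stem):

	# Print the subject hash that OpenSSL uses to name certs under /etc/ssl/certs.
	out/minikube-linux-arm64 -p functional-026028 ssh \
	  "sudo openssl x509 -hash -noout -in /etc/ssl/certs/707279.pem"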

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-026028 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl is-active docker": exit status 1 (456.537208ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl is-active containerd": exit status 1 (377.826974ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
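Note: `systemctl is-active` prints "inactive" and exits with status 3 for a stopped unit, which minikube's ssh wrapper surfaces as a non-zero exit; that is the expected result on a crio node. A minimal sketch:

	# Both alternate runtimes should report inactive (non-zero exit) when crio is active.
	out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl is-active docker"
	out/minikube-linux-arm64 -p functional-026028 ssh "sudo systemctl is-active containerd"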

TestFunctional/parallel/License (0.32s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-026028 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-026028 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-026028 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-026028 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 735657: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.18s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 version -o=json --components: (1.179068261s)
--- PASS: TestFunctional/parallel/Version/components (1.18s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-026028 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-026028 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-026028
localhost/kicbase/echo-server:functional-026028
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-026028 image ls --format short --alsologtostderr:
I0908 14:13:46.726779  742625 out.go:360] Setting OutFile to fd 1 ...
I0908 14:13:46.726932  742625 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:46.726958  742625 out.go:374] Setting ErrFile to fd 2...
I0908 14:13:46.726977  742625 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:46.727263  742625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
I0908 14:13:46.727946  742625 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:46.728143  742625 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:46.728661  742625 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:13:46.747171  742625 ssh_runner.go:195] Run: systemctl --version
I0908 14:13:46.747359  742625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:13:46.765437  742625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:13:46.860731  742625 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
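
Note: each ImageCommands list subtest shells out to the minikube binary and checks stdout for expected repository tags. A minimal Go sketch of that pattern (an illustration, not the functional_test.go source; the binary path, profile name, and expected tag are taken from the log above):

  package main

  import (
  	"fmt"
  	"os/exec"
  	"strings"
  )

  func main() {
  	// Same invocation as functional_test.go:276 above.
  	out, err := exec.Command("out/minikube-linux-arm64",
  		"-p", "functional-026028", "image", "ls", "--format", "short").CombinedOutput()
  	if err != nil {
  		fmt.Println("image ls failed:", err)
  		return
  	}
  	if strings.Contains(string(out), "registry.k8s.io/pause:3.10.1") {
  		fmt.Println("expected image is present")
  	}
  }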

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-026028 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/my-image                      │ functional-026028  │ 32fc9f48dd154 │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ localhost/minikube-local-cache-test     │ functional-026028  │ dbfbd16e147c5 │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/kicbase/echo-server           │ functional-026028  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-026028 image ls --format table --alsologtostderr:
I0908 14:13:51.369266  742975 out.go:360] Setting OutFile to fd 1 ...
I0908 14:13:51.369405  742975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:51.369417  742975 out.go:374] Setting ErrFile to fd 2...
I0908 14:13:51.369422  742975 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:51.369685  742975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
I0908 14:13:51.370324  742975 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:51.370451  742975 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:51.370930  742975 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:13:51.389064  742975 ssh_runner.go:195] Run: systemctl --version
I0908 14:13:51.389122  742975 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:13:51.410819  742975 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:13:51.500709  742975 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-026028 image ls --format json --alsologtostderr:
[
  {"id":"9ba2491e661e0ddca52702fce582bc1e58953ae1a41fd579b03467246ea6d5b6","repoDigests":["docker.io/library/16253d60d21af0281089d93bcaae7524c3f0d42da74508b5926f3f318ee9e4e7-tmp@sha256:897810d5be785d6799e197284dc7542753ca4c8632d2d8489ae0763e8cbdc794"],"repoTags":[],"size":"1637644"},
  {"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-026028"],"size":"4788229"},
  {"id":"dbfbd16e147c5395abff1eba6a5d35f9308b6221f0120cefd1bc47f2e34dbb4a","repoDigests":["localhost/minikube-local-cache-test@sha256:354d4ceb9e7fb2399e379818a994ff4897f479be097b666988db452eb29f179c"],"repoTags":["localhost/minikube-local-cache-test:functional-026028"],"size":"3330"},
  {"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},
  {"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
  {"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},
  {"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},
  {"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},
  {"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},
  {"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
  {"id":"32fc9f48dd154c6856af6bd8402f554277216dce23106221f0ce5358c7437498","repoDigests":["localhost/my-image@sha256:5c356e62ee3390ca4418bf47ae647640f1b7e3d6bb2c14eab8f9a917e9d47287"],"repoTags":["localhost/my-image:functional-026028"],"size":"1640226"},
  {"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},
  {"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"},
  {"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},
  {"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
  {"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},
  {"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
  {"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}
]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-026028 image ls --format json --alsologtostderr:
I0908 14:13:51.131428  742942 out.go:360] Setting OutFile to fd 1 ...
I0908 14:13:51.131602  742942 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:51.131618  742942 out.go:374] Setting ErrFile to fd 2...
I0908 14:13:51.131623  742942 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:51.131905  742942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
I0908 14:13:51.132620  742942 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:51.132749  742942 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:51.133215  742942 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:13:51.151545  742942 ssh_runner.go:195] Run: systemctl --version
I0908 14:13:51.151616  742942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:13:51.171309  742942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:13:51.264685  742942 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
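
Note: the JSON variant is the machine-readable form; its fields (id, repoDigests, repoTags, size) are visible in the stdout above. A hedged decoding sketch (the struct type is mine, shaped to match that output):

  package main

  import (
  	"encoding/json"
  	"fmt"
  	"os/exec"
  )

  // imageEntry mirrors the fields in the `image ls --format json` output above.
  type imageEntry struct {
  	ID          string   `json:"id"`
  	RepoDigests []string `json:"repoDigests"`
  	RepoTags    []string `json:"repoTags"`
  	Size        string   `json:"size"`
  }

  func main() {
  	out, err := exec.Command("out/minikube-linux-arm64",
  		"-p", "functional-026028", "image", "ls", "--format", "json").Output()
  	if err != nil {
  		panic(err)
  	}
  	var images []imageEntry
  	if err := json.Unmarshal(out, &images); err != nil {
  		panic(err)
  	}
  	for _, img := range images {
  		fmt.Printf("%s  %v\n", img.ID[:12], img.RepoTags)
  	}
  }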

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-026028 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: dbfbd16e147c5395abff1eba6a5d35f9308b6221f0120cefd1bc47f2e34dbb4a
repoDigests:
- localhost/minikube-local-cache-test@sha256:354d4ceb9e7fb2399e379818a994ff4897f479be097b666988db452eb29f179c
repoTags:
- localhost/minikube-local-cache-test:functional-026028
size: "3330"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-026028
size: "4788229"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-026028 image ls --format yaml --alsologtostderr:
I0908 14:13:46.958680  742656 out.go:360] Setting OutFile to fd 1 ...
I0908 14:13:46.958855  742656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:46.958885  742656 out.go:374] Setting ErrFile to fd 2...
I0908 14:13:46.958905  742656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:46.959189  742656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
I0908 14:13:46.959880  742656 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:46.960048  742656 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:46.960533  742656 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:13:46.978500  742656 ssh_runner.go:195] Run: systemctl --version
I0908 14:13:46.978553  742656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:13:46.997160  742656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:13:47.084678  742656 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh pgrep buildkitd: exit status 1 (272.792457ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image build -t localhost/my-image:functional-026028 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 image build -t localhost/my-image:functional-026028 testdata/build --alsologtostderr: (3.432809542s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-026028 image build -t localhost/my-image:functional-026028 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9ba2491e661
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-026028
--> 32fc9f48dd1
Successfully tagged localhost/my-image:functional-026028
32fc9f48dd154c6856af6bd8402f554277216dce23106221f0ce5358c7437498
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-026028 image build -t localhost/my-image:functional-026028 testdata/build --alsologtostderr:
I0908 14:13:47.454318  742747 out.go:360] Setting OutFile to fd 1 ...
I0908 14:13:47.455069  742747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:47.455109  742747 out.go:374] Setting ErrFile to fd 2...
I0908 14:13:47.455135  742747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 14:13:47.455440  742747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
I0908 14:13:47.456179  742747 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:47.456909  742747 config.go:182] Loaded profile config "functional-026028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 14:13:47.457405  742747 cli_runner.go:164] Run: docker container inspect functional-026028 --format={{.State.Status}}
I0908 14:13:47.479427  742747 ssh_runner.go:195] Run: systemctl --version
I0908 14:13:47.479483  742747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-026028
I0908 14:13:47.501789  742747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/functional-026028/id_rsa Username:docker}
I0908 14:13:47.592714  742747 build_images.go:161] Building image from path: /tmp/build.4168638136.tar
I0908 14:13:47.592785  742747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 14:13:47.602324  742747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4168638136.tar
I0908 14:13:47.605888  742747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4168638136.tar: stat -c "%s %y" /var/lib/minikube/build/build.4168638136.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4168638136.tar': No such file or directory
I0908 14:13:47.605920  742747 ssh_runner.go:362] scp /tmp/build.4168638136.tar --> /var/lib/minikube/build/build.4168638136.tar (3072 bytes)
I0908 14:13:47.632060  742747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4168638136
I0908 14:13:47.641104  742747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4168638136 -xf /var/lib/minikube/build/build.4168638136.tar
I0908 14:13:47.650726  742747 crio.go:315] Building image: /var/lib/minikube/build/build.4168638136
I0908 14:13:47.650816  742747 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-026028 /var/lib/minikube/build/build.4168638136 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0908 14:13:50.810580  742747 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-026028 /var/lib/minikube/build/build.4168638136 --cgroup-manager=cgroupfs: (3.159739378s)
I0908 14:13:50.810646  742747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4168638136
I0908 14:13:50.819336  742747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4168638136.tar
I0908 14:13:50.827982  742747 build_images.go:217] Built localhost/my-image:functional-026028 from /tmp/build.4168638136.tar
I0908 14:13:50.828020  742747 build_images.go:133] succeeded building to: functional-026028
I0908 14:13:50.828025  742747 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)
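
Note: the STEP lines above imply a three-step build context in testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt), and the stderr shows that on crio minikube shells out to `sudo podman build` inside the guest. A sketch of the driving flow, assuming only the commands visible in the log (the pgrep probe failing with exit status 1 simply means no buildkitd is running):

  package main

  import (
  	"fmt"
  	"os/exec"
  )

  func main() {
  	profile := "functional-026028"
  	// functional_test.go:323 above: a failed pgrep is expected here.
  	if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
  		"ssh", "pgrep buildkitd").Run(); err != nil {
  		fmt.Println("no buildkitd running in the guest")
  	}
  	// functional_test.go:330 above: build the image inside the cluster runtime.
  	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
  		"image", "build", "-t", "localhost/my-image:"+profile,
  		"testdata/build", "--alsologtostderr").CombinedOutput()
  	if err != nil {
  		fmt.Println("build failed:", err)
  	}
  	fmt.Print(string(out))
  }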

TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-026028
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image load --daemon kicbase/echo-server:functional-026028 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 image load --daemon kicbase/echo-server:functional-026028 --alsologtostderr: (1.125807957s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image load --daemon kicbase/echo-server:functional-026028 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-026028
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image load --daemon kicbase/echo-server:functional-026028 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image save kicbase/echo-server:functional-026028 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image rm kicbase/echo-server:functional-026028 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)
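
Note: ImageSaveToFile and ImageLoadFromFile above form a round trip through a tarball. A compact sketch of the same pair of commands (the tar path here is a placeholder; the test uses the workspace path shown in the log):

  package main

  import "os/exec"

  func run(args ...string) error {
  	return exec.Command("out/minikube-linux-arm64", args...).Run()
  }

  func main() {
  	tar := "/tmp/echo-server-save.tar" // placeholder path
  	if err := run("-p", "functional-026028", "image", "save",
  		"kicbase/echo-server:functional-026028", tar); err != nil {
  		panic(err)
  	}
  	if err := run("-p", "functional-026028", "image", "load", tar); err != nil {
  		panic(err)
  	}
  }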

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-026028
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 image save --daemon kicbase/echo-server:functional-026028 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-026028
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 update-context --alsologtostderr -v=2
E0908 14:15:00.882136  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/MountCmd/any-port (26.72s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdany-port368473917/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757339872950952889" to /tmp/TestFunctionalparallelMountCmdany-port368473917/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757339872950952889" to /tmp/TestFunctionalparallelMountCmdany-port368473917/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757339872950952889" to /tmp/TestFunctionalparallelMountCmdany-port368473917/001/test-1757339872950952889
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.788633ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:57:53.291046  707279 retry.go:31] will retry after 413.35901ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 13:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 13:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 13:57 test-1757339872950952889
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh cat /mount-9p/test-1757339872950952889
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-026028 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [4bddfbbf-8d3f-4f28-98b9-6734b28b8959] Pending
helpers_test.go:352: "busybox-mount" [4bddfbbf-8d3f-4f28-98b9-6734b28b8959] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [4bddfbbf-8d3f-4f28-98b9-6734b28b8959] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [4bddfbbf-8d3f-4f28-98b9-6734b28b8959] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.004209595s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-026028 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdany-port368473917/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.72s)
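
Note: the mount daemon comes up asynchronously, so the harness polls findmnt; retry.go:31 above shows one failed probe followed by a ~413ms backoff before the mount becomes visible. A hedged sketch of that poll loop (attempt count and sleep interval are my choices):

  package main

  import (
  	"fmt"
  	"os/exec"
  	"time"
  )

  func main() {
  	for attempt := 0; attempt < 5; attempt++ {
  		// Same probe as functional_test_mount_test.go:115 above.
  		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-026028",
  			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
  		if err == nil {
  			fmt.Println("9p mount is visible in the guest")
  			return
  		}
  		time.Sleep(500 * time.Millisecond)
  	}
  	fmt.Println("mount never appeared")
  }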

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdspecific-port1667031619/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.76053ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:58:20.021674  707279 retry.go:31] will retry after 490.657623ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdspecific-port1667031619/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh "sudo umount -f /mount-9p": exit status 1 (274.486188ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-026028 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdspecific-port1667031619/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T" /mount1: exit status 1 (587.927944ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 13:58:22.134935  707279 retry.go:31] will retry after 463.975932ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-026028 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-026028 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1773629588/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-026028 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "345.72245ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "75.319554ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "357.26695ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "59.942433ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
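
Note: the full listing above takes ~357ms while --light returns in ~60ms, presumably because the light variant skips probing each cluster's status. A small sketch that reproduces the comparison (the timing helper is mine; absolute numbers vary by machine):

  package main

  import (
  	"fmt"
  	"os/exec"
  	"time"
  )

  func timeIt(args ...string) time.Duration {
  	start := time.Now()
  	_ = exec.Command("out/minikube-linux-arm64", args...).Run()
  	return time.Since(start)
  }

  func main() {
  	fmt.Println("full: ", timeIt("profile", "list", "-o", "json"))
  	fmt.Println("light:", timeIt("profile", "list", "-o", "json", "--light"))
  }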

TestFunctional/parallel/ServiceCmd/List (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 service list: (1.297087499s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-026028 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-026028 service list -o json: (1.295579704s)
functional_test.go:1504: Took "1.295653001s" to run "out/minikube-linux-arm64 -p functional-026028 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.30s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-026028
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-026028
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-026028
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (169.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 14:18:37.816248  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m48.697982304s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (169.53s)
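
Note: ha_test.go:101 drives the whole multi-control-plane bring-up with one start invocation; the --ha flag is what provisions the extra control-plane nodes. A sketch of the same call from Go (flags copied verbatim from the log line above):

  package main

  import (
  	"os"
  	"os/exec"
  )

  func main() {
  	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-000694", "start",
  		"--ha", "--memory", "3072", "--wait", "true",
  		"--alsologtostderr", "-v", "5",
  		"--driver=docker", "--container-runtime=crio")
  	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
  	if err := cmd.Run(); err != nil {
  		os.Exit(1)
  	}
  }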

TestMultiControlPlane/serial/DeployApp (9.54s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 kubectl -- rollout status deployment/busybox: (6.227735247s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-2ggrx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-rvtmv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-x4jg9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-2ggrx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-rvtmv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-x4jg9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-2ggrx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-rvtmv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-x4jg9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.54s)

TestMultiControlPlane/serial/PingHostFromPods (1.83s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-2ggrx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-2ggrx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-rvtmv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-rvtmv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-x4jg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 kubectl -- exec busybox-7b57f96db7-x4jg9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.83s)
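
Note: the pipeline in ha_test.go:207 takes line 5 of busybox nslookup output and cuts the third space-separated field, which is the resolved address of host.minikube.internal; ha_test.go:218 then pings the host-side bridge address from the same pod. A minimal sketch of the same check (pod and context names are copied from the log; an illustration, not the ha_test.go source):

  package main

  import (
  	"fmt"
  	"os/exec"
  	"strings"
  )

  func main() {
  	pod := "busybox-7b57f96db7-2ggrx"
  	resolve := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  	out, err := exec.Command("kubectl", "--context", "ha-000694", "exec", pod,
  		"--", "sh", "-c", resolve).Output()
  	if err != nil {
  		panic(err)
  	}
  	fmt.Println("host.minikube.internal ->", strings.TrimSpace(string(out)))
  	// ha_test.go:218 above: ping the gateway as seen from the pod network.
  	_ = exec.Command("kubectl", "--context", "ha-000694", "exec", pod,
  		"--", "sh", "-c", "ping -c 1 192.168.49.1").Run()
  }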

TestMultiControlPlane/serial/AddWorkerNode (60.65s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 node add --alsologtostderr -v 5: (59.636211803s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5: (1.009211691s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.65s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-000694 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.00s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp testdata/cp-test.txt ha-000694:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile41418781/001/cp-test_ha-000694.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694:/home/docker/cp-test.txt ha-000694-m02:/home/docker/cp-test_ha-000694_ha-000694-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test_ha-000694_ha-000694-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694:/home/docker/cp-test.txt ha-000694-m03:/home/docker/cp-test_ha-000694_ha-000694-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test_ha-000694_ha-000694-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694:/home/docker/cp-test.txt ha-000694-m04:/home/docker/cp-test_ha-000694_ha-000694-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test_ha-000694_ha-000694-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp testdata/cp-test.txt ha-000694-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile41418781/001/cp-test_ha-000694-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m02:/home/docker/cp-test.txt ha-000694:/home/docker/cp-test_ha-000694-m02_ha-000694.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test_ha-000694-m02_ha-000694.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m02:/home/docker/cp-test.txt ha-000694-m03:/home/docker/cp-test_ha-000694-m02_ha-000694-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test_ha-000694-m02_ha-000694-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m02:/home/docker/cp-test.txt ha-000694-m04:/home/docker/cp-test_ha-000694-m02_ha-000694-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test_ha-000694-m02_ha-000694-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp testdata/cp-test.txt ha-000694-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile41418781/001/cp-test_ha-000694-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m03:/home/docker/cp-test.txt ha-000694:/home/docker/cp-test_ha-000694-m03_ha-000694.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test_ha-000694-m03_ha-000694.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m03:/home/docker/cp-test.txt ha-000694-m02:/home/docker/cp-test_ha-000694-m03_ha-000694-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test_ha-000694-m03_ha-000694-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m03:/home/docker/cp-test.txt ha-000694-m04:/home/docker/cp-test_ha-000694-m03_ha-000694-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test_ha-000694-m03_ha-000694-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp testdata/cp-test.txt ha-000694-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile41418781/001/cp-test_ha-000694-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m04:/home/docker/cp-test.txt ha-000694:/home/docker/cp-test_ha-000694-m04_ha-000694.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694 "sudo cat /home/docker/cp-test_ha-000694-m04_ha-000694.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m04:/home/docker/cp-test.txt ha-000694-m02:/home/docker/cp-test_ha-000694-m04_ha-000694-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m02 "sudo cat /home/docker/cp-test_ha-000694-m04_ha-000694-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 cp ha-000694-m04:/home/docker/cp-test.txt ha-000694-m03:/home/docker/cp-test_ha-000694-m04_ha-000694-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 ssh -n ha-000694-m03 "sudo cat /home/docker/cp-test_ha-000694-m04_ha-000694-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.40s)
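CopyFile exercises every direction `minikube cp` supports: host to node, node back to host, and node to node using <node>:<path> addressing, with an `ssh -n <node> "sudo cat ..."` readback after each copy. The matrix above is mechanical; a sketch of the loop that generates it, assuming the node names from this run (error handling reduced to printing):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test and reports failures.
func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		fmt.Printf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	nodes := []string{"ha-000694", "ha-000694-m02", "ha-000694-m03", "ha-000694-m04"}
	for _, src := range nodes {
		// host -> node, then node -> host
		run("-p", "ha-000694", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", "ha-000694", "cp", src+":/home/docker/cp-test.txt", "/tmp/cp-test_"+src+".txt")
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// node -> node, then read the file back over ssh to verify it landed
			dest := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			run("-p", "ha-000694", "cp", src+":/home/docker/cp-test.txt", dest)
			run("-p", "ha-000694", "ssh", "-n", dst,
				fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}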

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 node stop m02 --alsologtostderr -v 5: (11.978924184s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5: exit status 7 (768.498436ms)

-- stdout --
	ha-000694
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-000694-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-000694-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-000694-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0908 14:22:19.496613  759143 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:22:19.496805  759143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:22:19.496820  759143 out.go:374] Setting ErrFile to fd 2...
	I0908 14:22:19.496826  759143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:22:19.497130  759143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:22:19.497377  759143 out.go:368] Setting JSON to false
	I0908 14:22:19.497436  759143 mustload.go:65] Loading cluster: ha-000694
	I0908 14:22:19.497499  759143 notify.go:220] Checking for updates...
	I0908 14:22:19.497893  759143 config.go:182] Loaded profile config "ha-000694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:22:19.497917  759143 status.go:174] checking status of ha-000694 ...
	I0908 14:22:19.498509  759143 cli_runner.go:164] Run: docker container inspect ha-000694 --format={{.State.Status}}
	I0908 14:22:19.522330  759143 status.go:371] ha-000694 host status = "Running" (err=<nil>)
	I0908 14:22:19.522354  759143 host.go:66] Checking if "ha-000694" exists ...
	I0908 14:22:19.522669  759143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-000694
	I0908 14:22:19.553685  759143 host.go:66] Checking if "ha-000694" exists ...
	I0908 14:22:19.554001  759143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:22:19.554045  759143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-000694
	I0908 14:22:19.577582  759143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/ha-000694/id_rsa Username:docker}
	I0908 14:22:19.669832  759143 ssh_runner.go:195] Run: systemctl --version
	I0908 14:22:19.675345  759143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:22:19.687882  759143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:22:19.761039  759143 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 14:22:19.751046593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:22:19.761629  759143 kubeconfig.go:125] found "ha-000694" server: "https://192.168.49.254:8443"
	I0908 14:22:19.761664  759143 api_server.go:166] Checking apiserver status ...
	I0908 14:22:19.761712  759143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:22:19.773427  759143 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0908 14:22:19.782983  759143 api_server.go:182] apiserver freezer: "12:freezer:/docker/44fd5d83902d6a517863cc5010e237cff200f38ccc44362ad20688634abc6df0/crio/crio-c430fa359c3f5054fdefd85a57501c16d7c531134c56b186a774c51abd7dd9dd"
	I0908 14:22:19.783095  759143 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/44fd5d83902d6a517863cc5010e237cff200f38ccc44362ad20688634abc6df0/crio/crio-c430fa359c3f5054fdefd85a57501c16d7c531134c56b186a774c51abd7dd9dd/freezer.state
	I0908 14:22:19.792926  759143 api_server.go:204] freezer state: "THAWED"
	I0908 14:22:19.792964  759143 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 14:22:19.803001  759143 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 14:22:19.803079  759143 status.go:463] ha-000694 apiserver status = Running (err=<nil>)
	I0908 14:22:19.803097  759143 status.go:176] ha-000694 status: &{Name:ha-000694 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:22:19.803116  759143 status.go:174] checking status of ha-000694-m02 ...
	I0908 14:22:19.803466  759143 cli_runner.go:164] Run: docker container inspect ha-000694-m02 --format={{.State.Status}}
	I0908 14:22:19.821800  759143 status.go:371] ha-000694-m02 host status = "Stopped" (err=<nil>)
	I0908 14:22:19.821826  759143 status.go:384] host is not running, skipping remaining checks
	I0908 14:22:19.821833  759143 status.go:176] ha-000694-m02 status: &{Name:ha-000694-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:22:19.821855  759143 status.go:174] checking status of ha-000694-m03 ...
	I0908 14:22:19.822179  759143 cli_runner.go:164] Run: docker container inspect ha-000694-m03 --format={{.State.Status}}
	I0908 14:22:19.843027  759143 status.go:371] ha-000694-m03 host status = "Running" (err=<nil>)
	I0908 14:22:19.843056  759143 host.go:66] Checking if "ha-000694-m03" exists ...
	I0908 14:22:19.843378  759143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-000694-m03
	I0908 14:22:19.862847  759143 host.go:66] Checking if "ha-000694-m03" exists ...
	I0908 14:22:19.863172  759143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:22:19.863222  759143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-000694-m03
	I0908 14:22:19.881582  759143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/ha-000694-m03/id_rsa Username:docker}
	I0908 14:22:19.969435  759143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:22:19.989124  759143 kubeconfig.go:125] found "ha-000694" server: "https://192.168.49.254:8443"
	I0908 14:22:19.989152  759143 api_server.go:166] Checking apiserver status ...
	I0908 14:22:19.989193  759143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:22:20.011933  759143 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	I0908 14:22:20.024273  759143 api_server.go:182] apiserver freezer: "12:freezer:/docker/699948af16fdbd5724b59f044707554f353085e2521d94fdd293ef10053e8c5c/crio/crio-6503f6fd8e5f692f3d767da1a0cee4b60a00397ca0820bb1af8a05e87910b394"
	I0908 14:22:20.024355  759143 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/699948af16fdbd5724b59f044707554f353085e2521d94fdd293ef10053e8c5c/crio/crio-6503f6fd8e5f692f3d767da1a0cee4b60a00397ca0820bb1af8a05e87910b394/freezer.state
	I0908 14:22:20.034493  759143 api_server.go:204] freezer state: "THAWED"
	I0908 14:22:20.034526  759143 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 14:22:20.042915  759143 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 14:22:20.042951  759143 status.go:463] ha-000694-m03 apiserver status = Running (err=<nil>)
	I0908 14:22:20.042963  759143 status.go:176] ha-000694-m03 status: &{Name:ha-000694-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:22:20.042993  759143 status.go:174] checking status of ha-000694-m04 ...
	I0908 14:22:20.043322  759143 cli_runner.go:164] Run: docker container inspect ha-000694-m04 --format={{.State.Status}}
	I0908 14:22:20.061358  759143 status.go:371] ha-000694-m04 host status = "Running" (err=<nil>)
	I0908 14:22:20.061384  759143 host.go:66] Checking if "ha-000694-m04" exists ...
	I0908 14:22:20.061701  759143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-000694-m04
	I0908 14:22:20.080214  759143 host.go:66] Checking if "ha-000694-m04" exists ...
	I0908 14:22:20.080620  759143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:22:20.080700  759143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-000694-m04
	I0908 14:22:20.099391  759143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/ha-000694-m04/id_rsa Username:docker}
	I0908 14:22:20.193166  759143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:22:20.205858  759143 status.go:176] ha-000694-m04 status: &{Name:ha-000694-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
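Two details in this block deserve a note. First, the stderr shows how status probes a control-plane apiserver: pgrep the kube-apiserver process, confirm its freezer cgroup reports THAWED (i.e. the container is not paused), then GET /healthz on the VIP. Second, with m02 stopped, `minikube status` exits 7 rather than 0; reading minikube's status command, the exit code looks like a bitmask (1 = host, 2 = kubelet, 4 = apiserver not running), which would make 7 "all three down on some node" — treat that mapping as an assumption to verify against your minikube version. A sketch of consuming the exit code that way:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-000694", "status").Output()
	fmt.Print(string(out)) // Output still returns captured stdout on a non-zero exit
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	// Assumed flag values from minikube's status implementation:
	// 1 = some host stopped, 2 = some kubelet stopped, 4 = some apiserver stopped.
	if code&1 != 0 {
		fmt.Println("at least one host is not running")
	}
	if code&2 != 0 {
		fmt.Println("at least one kubelet is not running")
	}
	if code&4 != 0 {
		fmt.Println("at least one apiserver is not running")
	}
}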

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (29.67s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node start m02 --alsologtostderr -v 5
E0908 14:22:41.753357  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:41.759802  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:41.771269  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:41.792697  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:41.834233  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:41.915614  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:42.077942  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:42.399602  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:43.041586  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:44.323025  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:22:46.884613  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 node start m02 --alsologtostderr -v 5: (28.131895636s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5: (1.407419246s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.67s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.260610926s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.26s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.6s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node list --alsologtostderr -v 5
E0908 14:22:52.005966  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 stop --alsologtostderr -v 5
E0908 14:23:02.247452  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 stop --alsologtostderr -v 5: (26.821365046s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 start --wait true --alsologtostderr -v 5
E0908 14:23:22.728885  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:23:37.816842  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:24:03.691011  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 start --wait true --alsologtostderr -v 5: (1m54.569439919s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.60s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (13.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node delete m03 --alsologtostderr -v 5
E0908 14:25:25.613288  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 node delete m03 --alsologtostderr -v 5: (12.202065271s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.11s)
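Note the readiness check here: instead of jsonpath, the test feeds kubectl a go-template that walks each node's status.conditions and prints the status of the Ready condition, so a healthy cluster yields one "True" per remaining node. The same check as a standalone sketch, assuming the current kubectl context points at the cluster:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One True/False per node: the Ready condition status, nothing else.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	for i, s := range strings.Fields(string(out)) {
		if s != "True" {
			fmt.Printf("node %d is not Ready: %s\n", i, s)
		}
	}
}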

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.74s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 stop --alsologtostderr -v 5: (35.624102683s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5: exit status 7 (111.662285ms)

-- stdout --
	ha-000694
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-000694-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-000694-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 14:26:03.062718  773128 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:26:03.062829  773128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:26:03.062840  773128 out.go:374] Setting ErrFile to fd 2...
	I0908 14:26:03.062846  773128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:26:03.063082  773128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:26:03.063277  773128 out.go:368] Setting JSON to false
	I0908 14:26:03.063326  773128 mustload.go:65] Loading cluster: ha-000694
	I0908 14:26:03.063394  773128 notify.go:220] Checking for updates...
	I0908 14:26:03.064353  773128 config.go:182] Loaded profile config "ha-000694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:26:03.064388  773128 status.go:174] checking status of ha-000694 ...
	I0908 14:26:03.064938  773128 cli_runner.go:164] Run: docker container inspect ha-000694 --format={{.State.Status}}
	I0908 14:26:03.082452  773128 status.go:371] ha-000694 host status = "Stopped" (err=<nil>)
	I0908 14:26:03.082479  773128 status.go:384] host is not running, skipping remaining checks
	I0908 14:26:03.082502  773128 status.go:176] ha-000694 status: &{Name:ha-000694 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:26:03.082531  773128 status.go:174] checking status of ha-000694-m02 ...
	I0908 14:26:03.082851  773128 cli_runner.go:164] Run: docker container inspect ha-000694-m02 --format={{.State.Status}}
	I0908 14:26:03.104605  773128 status.go:371] ha-000694-m02 host status = "Stopped" (err=<nil>)
	I0908 14:26:03.104630  773128 status.go:384] host is not running, skipping remaining checks
	I0908 14:26:03.104638  773128 status.go:176] ha-000694-m02 status: &{Name:ha-000694-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:26:03.104669  773128 status.go:174] checking status of ha-000694-m04 ...
	I0908 14:26:03.104984  773128 cli_runner.go:164] Run: docker container inspect ha-000694-m04 --format={{.State.Status}}
	I0908 14:26:03.122362  773128 status.go:371] ha-000694-m04 host status = "Stopped" (err=<nil>)
	I0908 14:26:03.122387  773128 status.go:384] host is not running, skipping remaining checks
	I0908 14:26:03.122394  773128 status.go:176] ha-000694-m04 status: &{Name:ha-000694-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (90.69s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m29.746485006s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (90.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.85s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 node add --control-plane --alsologtostderr -v 5
E0908 14:27:41.753302  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:09.454640  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:28:37.817038  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-000694 node add --control-plane --alsologtostderr -v 5: (1m16.854452581s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-000694 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.85s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.021848389s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

                                                
                                    
TestJSONOutput/start/Command (83.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-850328 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-850328 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m23.943111896s)
--- PASS: TestJSONOutput/start/Command (83.95s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
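The Audit, DistinctCurrentSteps, and IncreasingCurrentSteps subtests validate the event stream the Command subtest captured: `--output=json` emits one CloudEvents-style JSON object per line, and every io.k8s.sigs.minikube.step event carries data.currentstep, which must be unique and increasing across the run. A sketch of that validation over a saved stream on stdin; the field names match the events printed under TestErrorJSONOutput further down:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	last := -1
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-JSON noise and non-step events
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if step <= last { // catches both duplicates and regressions
			fmt.Printf("step %d after %d: not distinct/increasing\n", step, last)
		}
		last = step
	}
}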

                                                
                                    
TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-850328 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-850328 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-850328 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-850328 --output=json --user=testUser: (5.817121453s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-676972 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-676972 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (98.440728ms)

-- stdout --
	{"specversion":"1.0","id":"af794774-3c72-4502-9941-0ca1e2088f21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-676972] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"18e5e6a2-8ecc-4b57-b6db-6e4500f32a11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"f5ca660e-dbcd-41ad-af14-58a10c6fd498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"111881c2-075c-4841-8c18-899182598b3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig"}}
	{"specversion":"1.0","id":"5613009f-a653-4cf3-9efc-12cb895f988a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube"}}
	{"specversion":"1.0","id":"1c7932f1-0fb6-4905-947d-7a3cd09bebdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"16852190-7c5e-4279-a79d-b1bb2f52f829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7e8f43e7-d3f6-4e29-a37c-8d1822bd1c6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-676972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-676972
--- PASS: TestErrorJSONOutput (0.24s)
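The stdout above is a complete picture of minikube's JSON error contract: a CloudEvents envelope of type io.k8s.sigs.minikube.error whose data block carries name (here DRV_UNSUPPORTED_OS), message, exitcode, and optional advice/issues/url fields. A sketch of decoding one such line; the struct mirrors only the fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Final event from the stdout above, with the envelope fields trimmed.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
}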

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.44s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-513372 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-513372 --network=: (38.262391989s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-513372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-513372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-513372: (2.156432372s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.44s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.6s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-189274 --network=bridge
E0908 14:31:40.884262  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-189274 --network=bridge: (36.568856926s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-189274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-189274
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-189274: (2.004664704s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.60s)

                                                
                                    
TestKicExistingNetwork (33.66s)

=== RUN   TestKicExistingNetwork
I0908 14:31:56.653712  707279 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 14:31:56.669382  707279 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 14:31:56.670207  707279 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 14:31:56.670245  707279 cli_runner.go:164] Run: docker network inspect existing-network
W0908 14:31:56.685958  707279 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 14:31:56.685988  707279 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0908 14:31:56.686003  707279 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0908 14:31:56.686104  707279 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 14:31:56.703630  707279 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-18df8eb81e6c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:87:32:85:20:15} reservation:<nil>}
I0908 14:31:56.703988  707279 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193de80}
I0908 14:31:56.704060  707279 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 14:31:56.704627  707279 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 14:31:56.765745  707279 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-771224 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-771224 --network=existing-network: (31.498918738s)
helpers_test.go:175: Cleaning up "existing-network-771224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-771224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-771224: (2.018499323s)
I0908 14:32:30.299732  707279 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.66s)
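
Note: the sequence this test exercises can be reproduced by hand. A minimal sketch based only on the commands logged above, trimmed to the essential flags; `minikube` stands in for the built out/minikube-linux-arm64 binary:

    # pre-create a bridge network the way minikube's network_create step does,
    # then point a new profile at it with --network
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o com.docker.network.driver.mtu=1500 existing-network
    minikube start -p existing-network-771224 --network=existing-network
    minikube delete -p existing-network-771224
    docker network ls --filter=label=existing-network   # the test checks the network listing afterwards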

                                                
                                    
TestKicCustomSubnet (36.36s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-296918 --subnet=192.168.60.0/24
E0908 14:32:41.752637  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-296918 --subnet=192.168.60.0/24: (34.252604429s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-296918 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-296918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-296918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-296918: (2.085389343s)
--- PASS: TestKicCustomSubnet (36.36s)
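
Note: --subnet pins the CIDR of the Docker network backing the profile, and the Go template used at kic_custom_network_test.go:161 is a handy standalone check. A sketch using this run's profile name, with `minikube` standing in for the built binary:

    minikube start -p custom-subnet-296918 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-296918 --format "{{(index .IPAM.Config 0).Subnet}}"
    # should print 192.168.60.0/24 if the requested subnet was honored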

                                                
                                    
TestKicStaticIP (37.49s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-587700 --static-ip=192.168.200.200
E0908 14:33:37.820911  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-587700 --static-ip=192.168.200.200: (35.184543843s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-587700 ip
helpers_test.go:175: Cleaning up "static-ip-587700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-587700
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-587700: (2.153419186s)
--- PASS: TestKicStaticIP (37.49s)
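
Note: --static-ip fixes the node's address inside the profile's Docker network, and the follow-up `ip` call is how the test confirms it. Condensed from the logged commands:

    minikube start -p static-ip-587700 --static-ip=192.168.200.200
    minikube -p static-ip-587700 ip    # should print 192.168.200.200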

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.62s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-537379 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-537379 --driver=docker  --container-runtime=crio: (34.01441979s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-539980 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-539980 --driver=docker  --container-runtime=crio: (32.233120902s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-537379
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-539980
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-539980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-539980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-539980: (2.074459567s)
helpers_test.go:175: Cleaning up "first-537379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-537379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-537379: (1.92979628s)
--- PASS: TestMinikubeProfile (71.62s)
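
Note: the profile commands above flip the active profile between the two clusters and then list both. Sketched out with this run's names:

    minikube start -p first-537379 --driver=docker --container-runtime=crio
    minikube start -p second-539980 --driver=docker --container-runtime=crio
    minikube profile first-537379    # make first-537379 the active profile
    minikube profile list -ojson     # the test inspects this JSON for both profiles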

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.81s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-448018 --memory=3072 --mount-string /tmp/TestMountStartserial402906961/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-448018 --memory=3072 --mount-string /tmp/TestMountStartserial402906961/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.806076421s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.81s)
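
Note: the long start line above wires a 9p host mount into the node at creation time. Broken out for readability (path, port, and msize values are simply the ones this run used):

    minikube start -p mount-start-1-448018 --memory=3072 --no-kubernetes \
        --mount-string /tmp/TestMountStartserial402906961/001:/minikube-host \
        --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
        --driver=docker --container-runtime=crio
    minikube -p mount-start-1-448018 ssh -- ls /minikube-host   # verify the mount from inside the node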

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-448018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-450069 --memory=3072 --mount-string /tmp/TestMountStartserial402906961/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-450069 --memory=3072 --mount-string /tmp/TestMountStartserial402906961/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.343369881s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.34s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-450069 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-448018 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-448018 --alsologtostderr -v=5: (1.640179431s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-450069 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-450069
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-450069: (1.200357719s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-450069
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-450069: (6.544127058s)
--- PASS: TestMountStart/serial/RestartStopped (7.54s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-450069 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (136.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-532627 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-532627 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.878785053s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.37s)
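
Note: a two-node cluster comes up from a single start invocation; condensed from the logged command, with `minikube` standing in for the built binary:

    minikube start -p multinode-532627 --nodes=2 --memory=3072 --wait=true \
        --driver=docker --container-runtime=crio
    minikube -p multinode-532627 status   # expect multinode-532627 (control plane) plus multinode-532627-m02 (worker)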

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- rollout status deployment/busybox
E0908 14:37:41.753333  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-532627 -- rollout status deployment/busybox: (4.371581327s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-hmnfc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-tn8v6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-hmnfc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-tn8v6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-hmnfc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-tn8v6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.68s)
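
Note: the DNS checks above run nslookup from a pod on each node. The shape of one round, with <busybox-pod> as a placeholder since the busybox-7b57f96db7-* names are per-run:

    minikube kubectl -p multinode-532627 -- rollout status deployment/busybox
    minikube kubectl -p multinode-532627 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local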

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.07s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-hmnfc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-hmnfc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-tn8v6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-532627 -- exec busybox-7b57f96db7-tn8v6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)
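
Note: the pipeline above recovers the host gateway IP from busybox's nslookup output. With the output format of the busybox image used here, line 5 reads like "Address 1: 192.168.67.1 host.minikube.internal", so:

    # awk 'NR==5' keeps only line 5; cut -d' ' -f3 keeps the third space-separated field (the IP)
    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    ping -c 1 192.168.67.1   # then one ping against the extracted address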

                                                
                                    
TestMultiNode/serial/AddNode (56.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-532627 -v=5 --alsologtostderr
E0908 14:38:37.816724  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-532627 -v=5 --alsologtostderr: (55.585413031s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.30s)
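
Note: node add appends the next sequentially named worker; in this run that is m03:

    minikube node add -p multinode-532627   # joins multinode-532627-m03 as a worker
    minikube -p multinode-532627 status     # should now report three nodes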

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-532627 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp testdata/cp-test.txt multinode-532627:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2394961109/001/cp-test_multinode-532627.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627:/home/docker/cp-test.txt multinode-532627-m02:/home/docker/cp-test_multinode-532627_multinode-532627-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test_multinode-532627_multinode-532627-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627:/home/docker/cp-test.txt multinode-532627-m03:/home/docker/cp-test_multinode-532627_multinode-532627-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m03 "sudo cat /home/docker/cp-test_multinode-532627_multinode-532627-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp testdata/cp-test.txt multinode-532627-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2394961109/001/cp-test_multinode-532627-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627-m02:/home/docker/cp-test.txt multinode-532627:/home/docker/cp-test_multinode-532627-m02_multinode-532627.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627 "sudo cat /home/docker/cp-test_multinode-532627-m02_multinode-532627.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627-m02:/home/docker/cp-test.txt multinode-532627-m03:/home/docker/cp-test_multinode-532627-m02_multinode-532627-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m03 "sudo cat /home/docker/cp-test_multinode-532627-m02_multinode-532627-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp testdata/cp-test.txt multinode-532627-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2394961109/001/cp-test_multinode-532627-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627-m03:/home/docker/cp-test.txt multinode-532627:/home/docker/cp-test_multinode-532627-m03_multinode-532627.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627 "sudo cat /home/docker/cp-test_multinode-532627-m03_multinode-532627.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 cp multinode-532627-m03:/home/docker/cp-test.txt multinode-532627-m02:/home/docker/cp-test_multinode-532627-m03_multinode-532627-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test_multinode-532627-m03_multinode-532627-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.19s)
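
Note: the copy matrix above exercises every direction minikube cp supports. The three general forms, with /tmp/out.txt as a hypothetical host destination standing in for the temp paths this run used:

    minikube -p multinode-532627 cp testdata/cp-test.txt multinode-532627:/home/docker/cp-test.txt       # host -> node
    minikube -p multinode-532627 cp multinode-532627:/home/docker/cp-test.txt /tmp/out.txt               # node -> host
    minikube -p multinode-532627 cp multinode-532627:/home/docker/cp-test.txt multinode-532627-m02:/home/docker/cp-test.txt   # node -> node
    minikube -p multinode-532627 ssh -n multinode-532627-m02 "sudo cat /home/docker/cp-test.txt"         # verify on the target node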

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-532627 node stop m03: (1.224235361s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-532627 status: exit status 7 (533.945464ms)

-- stdout --
	multinode-532627
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-532627-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-532627-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr: exit status 7 (533.854863ms)

-- stdout --
	multinode-532627
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-532627-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-532627-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 14:38:55.428325  826388 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:38:55.428557  826388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:38:55.428591  826388 out.go:374] Setting ErrFile to fd 2...
	I0908 14:38:55.428612  826388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:38:55.428898  826388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:38:55.429116  826388 out.go:368] Setting JSON to false
	I0908 14:38:55.429200  826388 mustload.go:65] Loading cluster: multinode-532627
	I0908 14:38:55.429273  826388 notify.go:220] Checking for updates...
	I0908 14:38:55.429658  826388 config.go:182] Loaded profile config "multinode-532627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:38:55.429706  826388 status.go:174] checking status of multinode-532627 ...
	I0908 14:38:55.430331  826388 cli_runner.go:164] Run: docker container inspect multinode-532627 --format={{.State.Status}}
	I0908 14:38:55.451752  826388 status.go:371] multinode-532627 host status = "Running" (err=<nil>)
	I0908 14:38:55.451775  826388 host.go:66] Checking if "multinode-532627" exists ...
	I0908 14:38:55.452119  826388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-532627
	I0908 14:38:55.481897  826388 host.go:66] Checking if "multinode-532627" exists ...
	I0908 14:38:55.482235  826388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:38:55.482281  826388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-532627
	I0908 14:38:55.500865  826388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33663 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/multinode-532627/id_rsa Username:docker}
	I0908 14:38:55.593732  826388 ssh_runner.go:195] Run: systemctl --version
	I0908 14:38:55.598304  826388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:38:55.610146  826388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:38:55.681365  826388 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 14:38:55.671907995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:38:55.681917  826388 kubeconfig.go:125] found "multinode-532627" server: "https://192.168.67.2:8443"
	I0908 14:38:55.681955  826388 api_server.go:166] Checking apiserver status ...
	I0908 14:38:55.682009  826388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 14:38:55.693326  826388 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	I0908 14:38:55.713962  826388 api_server.go:182] apiserver freezer: "12:freezer:/docker/18681324073f7838efeb6039248ef9ea1f35f524b07c6b70de1c56448da8100f/crio/crio-b31311a48d4ba6ad381ffb7263791362dbf5c80454426ab954d99099517e3c59"
	I0908 14:38:55.714051  826388 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/18681324073f7838efeb6039248ef9ea1f35f524b07c6b70de1c56448da8100f/crio/crio-b31311a48d4ba6ad381ffb7263791362dbf5c80454426ab954d99099517e3c59/freezer.state
	I0908 14:38:55.723618  826388 api_server.go:204] freezer state: "THAWED"
	I0908 14:38:55.723648  826388 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 14:38:55.731861  826388 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 14:38:55.731889  826388 status.go:463] multinode-532627 apiserver status = Running (err=<nil>)
	I0908 14:38:55.731901  826388 status.go:176] multinode-532627 status: &{Name:multinode-532627 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:38:55.731947  826388 status.go:174] checking status of multinode-532627-m02 ...
	I0908 14:38:55.732328  826388 cli_runner.go:164] Run: docker container inspect multinode-532627-m02 --format={{.State.Status}}
	I0908 14:38:55.749383  826388 status.go:371] multinode-532627-m02 host status = "Running" (err=<nil>)
	I0908 14:38:55.749410  826388 host.go:66] Checking if "multinode-532627-m02" exists ...
	I0908 14:38:55.749718  826388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-532627-m02
	I0908 14:38:55.768527  826388 host.go:66] Checking if "multinode-532627-m02" exists ...
	I0908 14:38:55.768847  826388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 14:38:55.768905  826388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-532627-m02
	I0908 14:38:55.786222  826388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33668 SSHKeyPath:/home/jenkins/minikube-integration/21504-705422/.minikube/machines/multinode-532627-m02/id_rsa Username:docker}
	I0908 14:38:55.877925  826388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 14:38:55.889479  826388 status.go:176] multinode-532627-m02 status: &{Name:multinode-532627-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:38:55.889516  826388 status.go:174] checking status of multinode-532627-m03 ...
	I0908 14:38:55.889823  826388 cli_runner.go:164] Run: docker container inspect multinode-532627-m03 --format={{.State.Status}}
	I0908 14:38:55.907353  826388 status.go:371] multinode-532627-m03 host status = "Stopped" (err=<nil>)
	I0908 14:38:55.907375  826388 status.go:384] host is not running, skipping remaining checks
	I0908 14:38:55.907382  826388 status.go:176] multinode-532627-m03 status: &{Name:multinode-532627-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
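
Note: status deliberately exits non-zero once any node is down; in this run it returned exit status 7 with m03's host and kubelet shown as Stopped, which a script can branch on:

    minikube -p multinode-532627 node stop m03
    minikube -p multinode-532627 status || echo "status exited $? (non-zero while a node is stopped)"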

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.21s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-532627 node start m03 -v=5 --alsologtostderr: (7.414607297s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.21s)

TestMultiNode/serial/RestartKeepsNodes (74.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-532627
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-532627
E0908 14:39:04.817314  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-532627: (24.802872491s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-532627 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-532627 --wait=true -v=5 --alsologtostderr: (49.967218804s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-532627
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.90s)

TestMultiNode/serial/DeleteNode (5.42s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-532627 node delete m03: (4.747525627s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.42s)
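
Note: the go-template at multinode_test.go:444 is a compact way to read just the Ready condition of every node; unwrapped from the extra quoting in the logged command:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # one True/False per node; two lines are expected after m03 is deleted above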

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-532627 stop: (23.625454432s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-532627 status: exit status 7 (103.328124ms)

-- stdout --
	multinode-532627
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-532627-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr: exit status 7 (94.26538ms)

-- stdout --
	multinode-532627
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-532627-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 14:40:48.221511  834275 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:40:48.222009  834275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:40:48.222052  834275 out.go:374] Setting ErrFile to fd 2...
	I0908 14:40:48.222073  834275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:40:48.222862  834275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:40:48.223198  834275 out.go:368] Setting JSON to false
	I0908 14:40:48.223252  834275 mustload.go:65] Loading cluster: multinode-532627
	I0908 14:40:48.224013  834275 config.go:182] Loaded profile config "multinode-532627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:40:48.224044  834275 status.go:174] checking status of multinode-532627 ...
	I0908 14:40:48.224836  834275 cli_runner.go:164] Run: docker container inspect multinode-532627 --format={{.State.Status}}
	I0908 14:40:48.226390  834275 notify.go:220] Checking for updates...
	I0908 14:40:48.242496  834275 status.go:371] multinode-532627 host status = "Stopped" (err=<nil>)
	I0908 14:40:48.242524  834275 status.go:384] host is not running, skipping remaining checks
	I0908 14:40:48.242531  834275 status.go:176] multinode-532627 status: &{Name:multinode-532627 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 14:40:48.242556  834275 status.go:174] checking status of multinode-532627-m02 ...
	I0908 14:40:48.242863  834275 cli_runner.go:164] Run: docker container inspect multinode-532627-m02 --format={{.State.Status}}
	I0908 14:40:48.264894  834275 status.go:371] multinode-532627-m02 host status = "Stopped" (err=<nil>)
	I0908 14:40:48.264915  834275 status.go:384] host is not running, skipping remaining checks
	I0908 14:40:48.264922  834275 status.go:176] multinode-532627-m02 status: &{Name:multinode-532627-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

TestMultiNode/serial/RestartMultiNode (53.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-532627 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-532627 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.56886936s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-532627 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.24s)

TestMultiNode/serial/ValidateNameConflict (35.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-532627
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-532627-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-532627-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.691937ms)

-- stdout --
	* [multinode-532627-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-532627-m02' is duplicated with machine name 'multinode-532627-m02' in profile 'multinode-532627'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-532627-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-532627-m03 --driver=docker  --container-runtime=crio: (32.571301153s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-532627
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-532627: exit status 80 (340.515147ms)

-- stdout --
	* Adding node m03 to cluster multinode-532627 as [worker]
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-532627-m03 already exists in multinode-532627-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-532627-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-532627-m03: (1.993173844s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.06s)

TestPreload (105.3s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-916537 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0908 14:42:41.752778  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-916537 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.783146209s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-916537 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-916537 image pull gcr.io/k8s-minikube/busybox: (3.51398744s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-916537
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-916537: (5.875988506s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-916537 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0908 14:43:37.816229  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-916537 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (32.502250208s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-916537 image list
helpers_test.go:175: Cleaning up "test-preload-916537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-916537
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-916537: (2.383959663s)
--- PASS: TestPreload (105.30s)
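
Note: the preload test pulls an extra image into a cluster started with --preload=false and checks that it survives a stop/start cycle. Condensed from the logged commands, with `minikube` standing in for the built binary:

    minikube start -p test-preload-916537 --memory=3072 --preload=false \
        --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
    minikube -p test-preload-916537 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-916537
    minikube start -p test-preload-916537 --memory=3072 --driver=docker --container-runtime=crio
    minikube -p test-preload-916537 image list   # busybox should still be listed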

                                                
                                    
TestScheduledStopUnix (109.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-778052 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-778052 --memory=3072 --driver=docker  --container-runtime=crio: (32.473046265s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778052 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-778052 -n scheduled-stop-778052
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778052 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 14:44:38.889520  707279 retry.go:31] will retry after 58.587µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.889666  707279 retry.go:31] will retry after 108.505µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.890226  707279 retry.go:31] will retry after 251.004µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.890795  707279 retry.go:31] will retry after 448.927µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.891895  707279 retry.go:31] will retry after 278.256µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.892266  707279 retry.go:31] will retry after 604.756µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.892991  707279 retry.go:31] will retry after 938.004µs: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.894149  707279 retry.go:31] will retry after 2.274811ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.897411  707279 retry.go:31] will retry after 2.468134ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.900664  707279 retry.go:31] will retry after 4.307305ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.906013  707279 retry.go:31] will retry after 6.029582ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.912192  707279 retry.go:31] will retry after 4.895677ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.917492  707279 retry.go:31] will retry after 16.921344ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.934889  707279 retry.go:31] will retry after 28.718948ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.964175  707279 retry.go:31] will retry after 15.049118ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
I0908 14:44:38.979356  707279 retry.go:31] will retry after 56.471527ms: open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/scheduled-stop-778052/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778052 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778052 -n scheduled-stop-778052
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-778052
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-778052 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-778052
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-778052: exit status 7 (67.415315ms)

-- stdout --
	scheduled-stop-778052
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778052 -n scheduled-stop-778052
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-778052 -n scheduled-stop-778052: exit status 7 (73.629061ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-778052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-778052
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-778052: (5.023934172s)
--- PASS: TestScheduledStopUnix (109.16s)
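
Note: the microsecond-scale retry lines above are just the test polling for the scheduled-stop pid file with backoff; the user-facing flow being validated is three commands:

    minikube stop -p scheduled-stop-778052 --schedule 5m       # arm a stop five minutes out
    minikube stop -p scheduled-stop-778052 --cancel-scheduled  # disarm it
    minikube stop -p scheduled-stop-778052 --schedule 15s      # this one is allowed to fire; status then exits 7 with host: Stopped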

                                                
                                    
TestInsufficientStorage (10.52s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-413153 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-413153 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.058321128s)

-- stdout --
	{"specversion":"1.0","id":"3ecf0d60-93ae-4aa3-998b-d8a520a4d614","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-413153] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d893597-342e-481d-acc4-0c0c668ebb87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21504"}}
	{"specversion":"1.0","id":"36b50b61-44e3-4898-832e-6c0e0cf08ada","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9cd5b02a-cd81-41d2-83a4-d40186f74d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig"}}
	{"specversion":"1.0","id":"f5479998-ddc6-44c4-92c5-b8c1529d0ee3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube"}}
	{"specversion":"1.0","id":"c0f13a45-9089-4d9e-a9b2-7fb0b6453cc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"93f51617-8c34-4a87-9bab-11239716792e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6951cdd6-2339-466e-b4cf-a3230522d654","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3df35d8b-a68e-4324-9cf8-2a77bc8d6c72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ff8e23cb-d334-48b3-8d3e-9a03b0965e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d119ad9-dff3-4a62-8dfb-01b15473a479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6da5a6b2-3721-472a-9ae8-f888ff770ca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-413153\" primary control-plane node in \"insufficient-storage-413153\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdc133bb-ba71-4373-9f04-c46066df077e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed7576c2-6ca4-4271-9631-66454ebf91eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"32b88150-abe1-4b21-982a-0c20aa195ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
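An aside on the output above: when started with JSON output, minikube emits one CloudEvent per line on stdout. Below is a minimal Go sketch of consuming that stream; the field names are taken from the events shown here, while the file name and the error-only filter are illustrative assumptions.

// events_decode.go (hypothetical name): decode the line-delimited
// CloudEvents stream shown above, printing only error events.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

// event declares just the fields used below; the real payload
// carries more (specversion, id, source, datacontenttype).
type event struct {
    Type string            `json:"type"`
    Data map[string]string `json:"data"`
}

func main() {
    sc := bufio.NewScanner(os.Stdin) // e.g. pipe the start output in here
    sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
    for sc.Scan() {
        var ev event
        if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
            continue // tolerate non-JSON lines
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Println("error:", ev.Data["name"], "-", ev.Data["message"])
        }
    }
}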
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-413153 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-413153 --output=json --layout=cluster: exit status 7 (287.727558ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-413153","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-413153","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 14:46:03.381932  851772 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-413153" does not appear in /home/jenkins/minikube-integration/21504-705422/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-413153 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-413153 --output=json --layout=cluster: exit status 7 (278.624592ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-413153","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-413153","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0908 14:46:03.661321  851835 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-413153" does not appear in /home/jenkins/minikube-integration/21504-705422/kubeconfig
	E0908 14:46:03.671256  851835 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/insufficient-storage-413153/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-413153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-413153
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-413153: (1.894458406s)
--- PASS: TestInsufficientStorage (10.52s)
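For readers scripting against this command: a minimal sketch of decoding the "status --output=json --layout=cluster" payload shown above. The struct shapes are inferred from this report's output, not minikube's exported API, and the literal below reuses the second payload from this test.

// layout_status.go (hypothetical name): decode the cluster-layout
// status JSON; field set assumed from the output above.
package main

import (
    "encoding/json"
    "fmt"
)

type component struct {
    Name       string
    StatusCode int
    StatusName string
}

type node struct {
    Name       string
    StatusCode int
    StatusName string
    Components map[string]component
}

type clusterState struct {
    Name       string
    StatusCode int
    StatusName string
    Components map[string]component
    Nodes      []node
}

func main() {
    // Payload copied from the second status call above.
    raw := `{"Name":"insufficient-storage-413153","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-413153","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
    var st clusterState
    if err := json.Unmarshal([]byte(raw), &st); err != nil {
        panic(err)
    }
    fmt.Println(st.Name, st.StatusName)                       // insufficient-storage-413153 InsufficientStorage
    fmt.Println(st.Nodes[0].Components["kubelet"].StatusName) // Stopped
}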

                                                
                                    
TestRunningBinaryUpgrade (60.75s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2556492175 start -p running-upgrade-349290 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2556492175 start -p running-upgrade-349290 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.553176926s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-349290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-349290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.303382987s)
helpers_test.go:175: Cleaning up "running-upgrade-349290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-349290
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-349290: (2.268697622s)
--- PASS: TestRunningBinaryUpgrade (60.75s)

                                                
                                    
TestKubernetesUpgrade (196.59s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0908 14:47:41.752608  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.042074855s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-330875
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-330875: (1.260330138s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-330875 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-330875 status --format={{.Host}}: exit status 7 (101.241205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m44.176270876s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-330875 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (124.110702ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-330875] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-330875
	    minikube start -p kubernetes-upgrade-330875 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3308752 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-330875 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-330875 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.255803201s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-330875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-330875
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-330875: (2.510441147s)
--- PASS: TestKubernetesUpgrade (196.59s)
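The refused downgrade above is a contract worth noting: minikube exits with code 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the cluster. A hedged sketch of how a caller outside the test suite might assert that, reusing the exact command line from this log:

// downgrade_check.go (hypothetical name): expect exit status 106
// from an unsupported in-place Kubernetes downgrade.
package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-arm64", "start",
        "-p", "kubernetes-upgrade-330875", "--memory=3072",
        "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
    err := cmd.Run()
    var ee *exec.ExitError
    if errors.As(err, &ee) && ee.ExitCode() == 106 {
        fmt.Println("downgrade correctly refused (exit 106)")
        return
    }
    fmt.Println("unexpected result:", err)
}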

                                                
                                    
TestMissingContainerUpgrade (127.59s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2651559447 start -p missing-upgrade-386063 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2651559447 start -p missing-upgrade-386063 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.25520431s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-386063
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-386063
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-386063 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-386063 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.174623736s)
helpers_test.go:175: Cleaning up "missing-upgrade-386063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-386063
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-386063: (2.168472202s)
--- PASS: TestMissingContainerUpgrade (127.59s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-450269 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-450269 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (87.195449ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-450269] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (51.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-450269 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-450269 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (50.714384907s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-450269 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (51.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-450269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-450269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.591414497s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-450269 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-450269 status -o json: exit status 2 (492.104771ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-450269","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-450269
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-450269: (2.193438356s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.28s)
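The status payload here is the flat profile form, distinct from the --layout=cluster form earlier in this report. A small sketch decoding it; the field names mirror the output above and the file name is illustrative:

// profile_status.go (hypothetical name): decode `status -o json`.
package main

import (
    "encoding/json"
    "fmt"
)

type profileStatus struct {
    Name       string
    Host       string
    Kubelet    string
    APIServer  string
    Kubeconfig string
    Worker     bool
}

func main() {
    raw := `{"Name":"NoKubernetes-450269","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
    var st profileStatus
    if err := json.Unmarshal([]byte(raw), &st); err != nil {
        panic(err)
    }
    // With --no-kubernetes the host runs but kubelet/apiserver stay
    // stopped, which is why the status command exits 2 here.
    fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}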

                                                
                                    
TestNoKubernetes/serial/Start (9.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-450269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-450269 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.185714617s)
--- PASS: TestNoKubernetes/serial/Start (9.19s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-450269 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-450269 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.391176ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
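The check above relies on systemctl semantics: is-active exits non-zero when the unit is inactive, so the failing ssh command is the passing outcome. A sketch of the same probe driven from Go, with the command line copied from this log:

// kubelet_inactive.go (hypothetical name): assert kubelet is not
// running inside the minikube node; a non-zero exit is expected.
package main

import (
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-arm64", "ssh",
        "-p", "NoKubernetes-450269",
        "sudo systemctl is-active --quiet service kubelet")
    if err := cmd.Run(); err != nil {
        fmt.Println("kubelet not running, as expected:", err)
        return
    }
    fmt.Println("unexpected: kubelet is active")
}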

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.65s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-450269
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-450269: (1.200679982s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-450269 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-450269 --driver=docker  --container-runtime=crio: (6.684456446s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-450269 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-450269 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.621339ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.82s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (62.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.700997145 start -p stopped-upgrade-859720 --memory=3072 --vm-driver=docker  --container-runtime=crio
E0908 14:48:20.885583  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:48:37.818319  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.700997145 start -p stopped-upgrade-859720 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.008065107s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.700997145 -p stopped-upgrade-859720 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.700997145 -p stopped-upgrade-859720 stop: (1.243581577s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-859720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-859720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.178084804s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.43s)
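The upgrade flow this test exercises is: provision with an old release binary, stop the profile, then start the same profile with the binary under test. A sketch of that sequence with os/exec; the commands come from this log, while the temp-file path is per-run and will differ:

// stopped_upgrade.go (hypothetical name): old-binary start, stop,
// then restart the same profile with the binary under test.
package main

import (
    "log"
    "os/exec"
)

func run(bin string, args ...string) {
    if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
        log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
    }
}

func main() {
    old := "/tmp/minikube-v1.32.0.700997145" // per-run temp copy of the old release
    run(old, "start", "-p", "stopped-upgrade-859720", "--memory=3072",
        "--vm-driver=docker", "--container-runtime=crio")
    run(old, "-p", "stopped-upgrade-859720", "stop")
    run("out/minikube-linux-arm64", "start", "-p", "stopped-upgrade-859720",
        "--memory=3072", "--driver=docker", "--container-runtime=crio")
}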

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-859720
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-859720: (1.214186923s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
TestPause/serial/Start (86.74s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-353457 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-353457 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m26.735985457s)
--- PASS: TestPause/serial/Start (86.74s)

                                                
                                    
TestNetworkPlugins/group/false (3.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-811617 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-811617 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (200.504919ms)

                                                
                                                
-- stdout --
	* [false-811617] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21504
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0908 14:51:29.801039  885519 out.go:360] Setting OutFile to fd 1 ...
	I0908 14:51:29.801150  885519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:51:29.801159  885519 out.go:374] Setting ErrFile to fd 2...
	I0908 14:51:29.801165  885519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 14:51:29.801404  885519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21504-705422/.minikube/bin
	I0908 14:51:29.802511  885519 out.go:368] Setting JSON to false
	I0908 14:51:29.803445  885519 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":16440,"bootTime":1757326650,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0908 14:51:29.803520  885519 start.go:140] virtualization:  
	I0908 14:51:29.807056  885519 out.go:179] * [false-811617] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 14:51:29.811011  885519 out.go:179]   - MINIKUBE_LOCATION=21504
	I0908 14:51:29.811094  885519 notify.go:220] Checking for updates...
	I0908 14:51:29.817351  885519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 14:51:29.820275  885519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21504-705422/kubeconfig
	I0908 14:51:29.823123  885519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21504-705422/.minikube
	I0908 14:51:29.826580  885519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 14:51:29.829484  885519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 14:51:29.832893  885519 config.go:182] Loaded profile config "pause-353457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 14:51:29.832992  885519 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 14:51:29.865625  885519 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 14:51:29.865752  885519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 14:51:29.930770  885519 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 14:51:29.921526704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 14:51:29.930873  885519 docker.go:318] overlay module found
	I0908 14:51:29.935816  885519 out.go:179] * Using the docker driver based on user configuration
	I0908 14:51:29.938582  885519 start.go:304] selected driver: docker
	I0908 14:51:29.938602  885519 start.go:918] validating driver "docker" against <nil>
	I0908 14:51:29.938615  885519 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 14:51:29.942122  885519 out.go:203] 
	W0908 14:51:29.944938  885519 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 14:51:29.947730  885519 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-811617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-811617" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:51:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-353457
contexts:
- context:
    cluster: pause-353457
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:51:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-353457
  name: pause-353457
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-353457
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/pause-353457/client.crt
    client-key: /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/pause-353457/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-811617

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-811617"

                                                
                                                
----------------------- debugLogs end: false-811617 [took: 3.4821356s] --------------------------------
helpers_test.go:175: Cleaning up "false-811617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-811617
--- PASS: TestNetworkPlugins/group/false (3.83s)
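The invariant asserted here: with the crio runtime, --cni=false is rejected up front with MK_USAGE (exit 14) before any cluster is created, which is why the whole group finishes in under four seconds. A sketch asserting that refusal, reusing the command line from this log:

// cni_required.go (hypothetical name): crio requires a CNI, so
// --cni=false should fail fast with exit status 14.
package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-811617",
        "--memory=3072", "--cni=false", "--driver=docker", "--container-runtime=crio")
    err := cmd.Run()
    var ee *exec.ExitError
    if errors.As(err, &ee) && ee.ExitCode() == 14 {
        fmt.Println("crio without CNI correctly rejected (exit 14)")
        return
    }
    fmt.Println("unexpected result:", err)
}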

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (27.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-353457 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-353457 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.8374639s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.90s)

                                                
                                    
TestPause/serial/Pause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-353457 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-353457 --alsologtostderr -v=5: (1.026333099s)
--- PASS: TestPause/serial/Pause (1.03s)

                                                
                                    
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-353457 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-353457 --output=json --layout=cluster: exit status 2 (394.079212ms)

                                                
                                                
-- stdout --
	{"Name":"pause-353457","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-353457","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
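The cluster-layout status output borrows HTTP-style codes (418 Paused above). For reference, a small table of the values observed in this report; this is an observed subset, not an exhaustive list of minikube's codes:

// status_codes.go (hypothetical name): status codes seen in this
// report's --layout=cluster outputs.
package main

import "fmt"

var statusNames = map[int]string{
    200: "OK",
    405: "Stopped",
    418: "Paused",
    500: "Error",
    507: "InsufficientStorage",
}

func main() {
    for code, name := range statusNames {
        fmt.Println(code, name)
    }
}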

                                                
                                    
TestPause/serial/Unpause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-353457 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (1.33s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-353457 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-353457 --alsologtostderr -v=5: (1.329904199s)
--- PASS: TestPause/serial/PauseAgain (1.33s)

                                                
                                    
TestPause/serial/DeletePaused (3.25s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-353457 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-353457 --alsologtostderr -v=5: (3.253262853s)
--- PASS: TestPause/serial/DeletePaused (3.25s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.52s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-353457
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-353457: exit status 1 (24.299203ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-353457: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.52s)
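Note: the deletion check above treats a failing docker volume inspect (exit status 1, empty [] on stdout, "no such volume" on stderr) as proof the profile's volume is gone. A hedged Go sketch of the same probe, shelling out to docker; volumeGone is an illustrative helper, not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect <name>` fails,
// which is what the check above treats as proof of deletion.
func volumeGone(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil // non-zero exit => "no such volume"
}

func main() {
	fmt.Println(volumeGone("pause-353457")) // true once the profile is deleted
}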

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (61.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-365369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0908 14:53:37.816580  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-365369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.278562292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-365369 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c9d3d169-600e-43fe-aca9-43a9849092e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c9d3d169-600e-43fe-aca9-43a9849092e2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005629878s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-365369 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)
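Note: the "waiting 8m0s for pods matching ..." lines come from a helper that watches the labeled pod until it is Running and healthy or the deadline passes. A rough Go sketch of that kind of polling loop, shelling out to kubectl (waitRunning is an illustrative stand-in; the real helper's mechanics are not shown in this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `kubectl get pods` for a label selector until a
// matching pod reports phase Running or the deadline passes.
func waitRunning(kubectx, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not Running within %v", selector, timeout)
}

func main() {
	if err := waitRunning("old-k8s-version-365369", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}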

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-365369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-365369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011855942s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-365369 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-365369 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-365369 --alsologtostderr -v=3: (11.9340906s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-365369 -n old-k8s-version-365369
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-365369 -n old-k8s-version-365369: exit status 7 (84.746516ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-365369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
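Note: the "(may be ok)" wording reflects that minikube status encodes health in the bits of its exit code (per the command's help text: 1 for the host, 2 for the cluster, 4 for Kubernetes). On that reading, 7 = 1 + 2 + 4 means all three are down, exactly what a stopped profile should report, and the exit status 2 seen in the Pause subtests below reads as the cluster bit alone. A tiny decoding sketch:

package main

import "fmt"

// Decode minikube's bit-encoded status exit code: 1 = host not OK,
// 2 = cluster not OK, 4 = Kubernetes not OK (per the status help text).
func main() {
	code := 7 // the exit status seen above for the stopped profile
	fmt.Println("host NOK:      ", code&1 != 0) // true
	fmt.Println("cluster NOK:   ", code&2 != 0) // true
	fmt.Println("kubernetes NOK:", code&4 != 0) // true => fully stopped
}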

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-365369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-365369 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.144931581s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-365369 -n old-k8s-version-365369
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vrcdj" [a5db6db7-03c9-486a-9c3c-79463b69b9fe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004220043s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vrcdj" [a5db6db7-03c9-486a-9c3c-79463b69b9fe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00330195s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-365369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-365369 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
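Note: this subtest lists the images in the container runtime and logs anything outside the expected per-Kubernetes-version set as "non-minikube", which is why kindnetd and the busybox test image show up. A hedged Go sketch of that comparison; the repoTags field name and the expected set here are assumptions for illustration, not minikube's verified schema:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// image carries only the field this sketch needs from the JSON image
// list; the field name is an assumption, not a verified schema.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	// Stand-in for `minikube -p <profile> image list --format=json` output.
	raw := []byte(`[{"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"]},{"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"]}]`)

	// expected: the per-Kubernetes-version image set being compared
	// against (contents here are illustrative).
	expected := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.34.0": true,
	}

	var imgs []image
	if err := json.Unmarshal(raw, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, im := range imgs {
		for _, tag := range im.RepoTags {
			if !expected[tag] {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}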

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-365369 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-365369 -n old-k8s-version-365369
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-365369 -n old-k8s-version-365369: exit status 2 (324.390482ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-365369 -n old-k8s-version-365369
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-365369 -n old-k8s-version-365369: exit status 2 (339.104902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-365369 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-365369 -n old-k8s-version-365369
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-365369 -n old-k8s-version-365369
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-740483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 14:55:44.818891  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-740483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m9.871160068s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (81.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-396213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-396213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m21.699758609s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-740483 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [afd6cb17-6786-40a9-a56f-7309539110a6] Pending
helpers_test.go:352: "busybox" [afd6cb17-6786-40a9-a56f-7309539110a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [afd6cb17-6786-40a9-a56f-7309539110a6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003952489s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-740483 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-740483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-740483 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.541830973s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-740483 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-740483 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-740483 --alsologtostderr -v=3: (12.232740331s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-740483 -n no-preload-740483
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-740483 -n no-preload-740483: exit status 7 (95.255006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-740483 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-740483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 14:57:41.753308  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-740483 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (49.472291889s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-740483 -n no-preload-740483
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-396213 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e0049bc1-1c5d-43ad-9505-97b1887acb31] Pending
helpers_test.go:352: "busybox" [e0049bc1-1c5d-43ad-9505-97b1887acb31] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e0049bc1-1c5d-43ad-9505-97b1887acb31] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004087996s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-396213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7lwpc" [aba12596-e71c-4b10-b549-a72949789691] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004785094s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-396213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-396213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079231567s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-396213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-396213 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-396213 --alsologtostderr -v=3: (12.049402108s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7lwpc" [aba12596-e71c-4b10-b549-a72949789691] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004761625s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-740483 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-740483 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-740483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-740483 -n no-preload-740483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-740483 -n no-preload-740483: exit status 2 (318.501465ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-740483 -n no-preload-740483
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-740483 -n no-preload-740483: exit status 2 (315.353672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-740483 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-740483 -n no-preload-740483
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-740483 -n no-preload-740483
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-396213 -n embed-certs-396213
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-396213 -n embed-certs-396213: exit status 7 (111.46921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-396213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (57.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-396213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-396213 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (57.091065614s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-396213 -n embed-certs-396213
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (57.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-690964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 14:58:37.817115  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.349762  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.356297  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.367833  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.389188  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.430679  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.512212  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.673796  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:06.995475  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:07.637311  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:08.918767  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 14:59:11.480218  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-690964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m26.075006866s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tsm6r" [70c40723-d67b-4445-993d-c7357a946502] Running
E0908 14:59:16.601881  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010535688s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tsm6r" [70c40723-d67b-4445-993d-c7357a946502] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004195406s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-396213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-396213 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-396213 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-396213 -n embed-certs-396213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-396213 -n embed-certs-396213: exit status 2 (329.435846ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-396213 -n embed-certs-396213
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-396213 -n embed-certs-396213: exit status 2 (329.535138ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-396213 --alsologtostderr -v=1
E0908 14:59:26.843493  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-396213 -n embed-certs-396213
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-396213 -n embed-certs-396213
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-799306 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-799306 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (41.290095531s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-690964 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e3008472-eb4b-419d-ac05-db9df50b2559] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0908 14:59:47.324802  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [e3008472-eb4b-419d-ac05-db9df50b2559] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004026013s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-690964 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-690964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-690964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.529697066s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-690964 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-690964 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-690964 --alsologtostderr -v=3: (12.30361638s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964: exit status 7 (126.606778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-690964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-690964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-690964 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.782894276s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-799306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-799306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001971546s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-799306 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-799306 --alsologtostderr -v=3: (1.231399879s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-799306 -n newest-cni-799306
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-799306 -n newest-cni-799306: exit status 7 (93.533927ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-799306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (22.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-799306 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 15:00:28.286912  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-799306 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (21.965645041s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-799306 -n newest-cni-799306
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-799306 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-799306 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-799306 -n newest-cni-799306
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-799306 -n newest-cni-799306: exit status 2 (368.584723ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-799306 -n newest-cni-799306
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-799306 -n newest-cni-799306: exit status 2 (343.073018ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-799306 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-799306 --alsologtostderr -v=1: (1.183955903s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-799306 -n newest-cni-799306
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-799306 -n newest-cni-799306
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.89s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.294398426s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d5n9p" [5799d2a5-627e-41be-a450-26c00a1558a5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0036552s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d5n9p" [5799d2a5-627e-41be-a450-26c00a1558a5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003669427s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-690964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-690964 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-690964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964: exit status 2 (375.141266ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964: exit status 2 (376.948938ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-690964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-690964 -n default-k8s-diff-port-690964
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.87s)
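
As the "(may be ok)" notes indicate, minikube status exits non-zero while components are paused or stopped, which the test tolerates. A by-hand sketch of the same sequence, using the profile above:

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-690964
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p default-k8s-diff-port-690964 || true  # expect: Paused
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p default-k8s-diff-port-690964 || true    # expect: Stopped
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-690964
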
E0908 15:07:47.080391  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.580254  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.586725  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.598955  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.620620  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.662201  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.743625  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:47.905519  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:48.227015  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:48.868932  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/kindnet-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (83.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0908 15:01:46.108487  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.114842  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.126188  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.147572  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.188965  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.270337  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.431931  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:46.754173  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:47.396485  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:48.678142  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:50.209029  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:51.239609  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:01:56.361134  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.352282479s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.35s)
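
Each Start test in this group runs the same command and varies only the CNI selection. A condensed sketch of the pattern, with profile names following the <cni>-811617 convention used in these logs:

    out/minikube-linux-arm64 start -p kindnet-811617 --memory=3072 --wait=true \
      --wait-timeout=15m --cni=kindnet --driver=docker --container-runtime=crio
    # sibling groups swap in --cni=calico, --cni=flannel, --cni=bridge,
    # --cni=testdata/kube-flannel.yaml, or --enable-default-cni=true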

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-811617 "pgrep -a kubelet"
I0908 15:02:05.844137  707279 config.go:182] Loaded profile config "auto-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qhtpj" [c18fe8f7-3308-45ba-b237-ec048db473c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 15:02:06.602553  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qhtpj" [c18fe8f7-3308-45ba-b237-ec048db473c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.002847632s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)
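
The NetCatPod step force-replaces the netcat deployment and then polls until its pod reports Running. An approximate by-hand version; rollout status is an illustrative substitute for the harness's poller:

    kubectl --context auto-811617 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-811617 rollout status deployment/netcat --timeout=15m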

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
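
Taken together, the DNS, Localhost, and HairPin steps are three probes run from inside the netcat pod; condensed here for reference, using the commands logged above:

    kubectl --context auto-811617 exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
    kubectl --context auto-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # local port
    kubectl --context auto-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via its own service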

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.14s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0908 15:02:41.752924  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/functional-026028/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m6.138328356s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-p7l7g" [9b466535-c1fd-4671-b886-e187decf2477] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005158789s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-811617 "pgrep -a kubelet"
I0908 15:02:54.011083  707279 config.go:182] Loaded profile config "kindnet-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9dnzk" [c1029d00-a768-4dae-a358-934edcf11245] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9dnzk" [c1029d00-a768-4dae-a358-934edcf11245] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003450742s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (71.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0908 15:03:37.816456  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/addons-242829/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.541079346s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.54s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6k6q5" [e09347c6-f99d-49a6-b58d-befe54879486] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-6k6q5" [e09347c6-f99d-49a6-b58d-befe54879486] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004599889s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-811617 "pgrep -a kubelet"
I0908 15:03:51.066069  707279 config.go:182] Loaded profile config "calico-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-642xv" [5e2d2aa7-425e-4e67-95a9-abcb19653bd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-642xv" [5e2d2aa7-425e-4e67-95a9-abcb19653bd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003564003s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.35s)

TestNetworkPlugins/group/calico/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/enable-default-cni/Start (83.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0908 15:04:29.971426  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:34.050387  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/old-k8s-version-365369/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m23.086846865s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.09s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-811617 "pgrep -a kubelet"
I0908 15:04:44.204613  707279 config.go:182] Loaded profile config "custom-flannel-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tn7j2" [3e27e358-b88a-4508-ab23-6b791833dbd4] Pending
E0908 15:04:45.717205  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:45.723496  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:45.734780  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:45.756171  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:45.797566  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:45.879310  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:46.041295  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:46.363752  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:47.005512  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:04:48.287559  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tn7j2" [3e27e358-b88a-4508-ab23-6b791833dbd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 15:04:50.848865  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tn7j2" [3e27e358-b88a-4508-ab23-6b791833dbd4] Running
E0908 15:04:55.970616  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003093797s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (109.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0908 15:05:26.693336  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/default-k8s-diff-port-690964/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m49.02489134s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-811617 "pgrep -a kubelet"
I0908 15:05:52.972666  707279 config.go:182] Loaded profile config "enable-default-cni-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mxsvz" [b4b48c40-222b-40d4-896a-5b96ef16d7ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mxsvz" [b4b48c40-222b-40d4-896a-5b96ef16d7ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00471443s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (47.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0908 15:06:46.108633  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.104846  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.111372  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.122810  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.144208  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.185597  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.266925  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.428616  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:06.750062  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:07.391985  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:08.673430  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-811617 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (47.146735409s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fdcz6" [fe97fc04-76e5-47be-a5da-329df95f7ac0] Running
E0908 15:07:11.234861  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003664945s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-811617 "pgrep -a kubelet"
I0908 15:07:12.373284  707279 config.go:182] Loaded profile config "bridge-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-92jsc" [fef241a4-b51d-41f0-90b5-4719ee4d5311] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 15:07:13.812938  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/no-preload-740483/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 15:07:16.356760  707279 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/auto-811617/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-92jsc" [fef241a4-b51d-41f0-90b5-4719ee4d5311] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.007311044s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-811617 "pgrep -a kubelet"
I0908 15:07:16.895194  707279 config.go:182] Loaded profile config "flannel-811617": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-811617 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rpdzj" [12ab255a-9546-4451-b6dd-f3623c5caec4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rpdzj" [12ab255a-9546-4451-b6dd-f3623c5caec4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003876458s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-811617 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-811617 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

Test skip (32/331)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-833805 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-833805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-833805
--- SKIP: TestDownloadOnlyKic (0.62s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-242829 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
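The three tunnel DNS skips above share one condition: DNS forwarding needs both Darwin and the Hyperkit driver. A sketch under those assumptions (driverName is a hypothetical stand-in for the VM driver under test):

package integration

import (
	"runtime"
	"testing"
)

// driverName is a hypothetical stand-in for the VM driver under test.
func driverName() string { return "docker" }

// skipUnlessHyperkitOnDarwin reflects the condition stated in the skips above.
func skipUnlessHyperkitOnDarwin(t *testing.T) {
	if runtime.GOOS != "darwin" || driverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}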

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
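This skip combines a driver requirement with an environment check. A minimal sketch, assuming the driver name is passed in (the helper below is illustrative, not minikube's actual code):

package integration

import (
	"os"
	"testing"
)

// skipUnlessNoneDriverWithSudoUser is an illustrative guard matching the
// condition above: the none driver plus a non-empty SUDO_USER.
func skipUnlessNoneDriverWithSudoUser(t *testing.T, driver string) {
	if driver != "none" || os.Getenv("SUDO_USER") == "" {
		t.Skip("Test requires none driver and SUDO_USER env to not be empty")
	}
}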

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-707893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-707893
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-811617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-811617" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:51:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-353457
contexts:
- context:
    cluster: pause-353457
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:51:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-353457
  name: pause-353457
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-353457
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/pause-353457/client.crt
    client-key: /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/pause-353457/client.key
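Note that this kubeconfig only contains the pause-353457 profile and its current-context is empty; the kubenet-811617 cluster was never provisioned because the test skipped before starting it, which is why every kubectl command in this debug dump reports that the context was not found.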

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-811617

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-811617"

                                                
                                                
----------------------- debugLogs end: kubenet-811617 [took: 3.566053722s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-811617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-811617
--- SKIP: TestNetworkPlugins/group/kubenet (3.74s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-811617 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-811617" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21504-705422/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:51:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-353457
contexts:
- context:
    cluster: pause-353457
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 14:51:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: pause-353457
  name: pause-353457
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-353457
  user:
    client-certificate: /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/pause-353457/client.crt
    client-key: /home/jenkins/minikube-integration/21504-705422/.minikube/profiles/pause-353457/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811617

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-811617" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811617"

                                                
                                                
----------------------- debugLogs end: cilium-811617 [took: 3.926320023s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-811617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-811617
--- SKIP: TestNetworkPlugins/group/cilium (4.09s)

                                                
                                    